The Inside View

By Michaël Trazzi

A podcast about AI progress. You can watch the video recordings and check out the transcripts at theinsideview.ai
Where to listen: Apple Podcasts, Google Podcasts, Pocket Casts, RadioPublic, Spotify, Stitcher

Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision
Collin Burns is a second-year ML PhD student at Berkeley, working with Jacob Steinhardt on making language models honest, interpretable, and aligned. In 2015 he broke the Rubik’s Cube world record, and he's now back with "Discovering Latent Knowledge in Language Models Without Supervision", a paper on how you can recover diverse knowledge represented in large language models without supervision. Transcript: https://theinsideview.ai/collin Paper: https://arxiv.org/abs/2212.03827 Lesswrong post: https://bit.ly/3kbyZML Host: https://twitter.com/MichaelTrazzi Collin: https://twitter.com/collinburns4 OUTLINE (00:22) Intro (01:33) Breaking The Rubik's Cube World Record (03:03) A Permutation That Happens Maybe 2% Of The Time (05:01) How Collin Became Convinced Of AI Alignment (07:55) Was Minerva Just Low-Hanging Fruit On MATH From Scaling? (12:47) IMO Gold Medal By 2026? How To Update From AI Progress (17:03) Plausibly Automating AI Research In The Next Five Years (24:23) Making LLMs Say The Truth (28:11) Lying Is Already Incentivized As We Have Seen With Diplomacy (32:29) Mind Reading On 'Brain Scans' Through Logical Consistency (35:18) Misalignment, Or Why One Does Not Simply Prompt A Model Into Being Truthful (38:43) Classifying Hidden States, Maybe Using Truth Features Represented Linearly (44:48) Building A Dataset For Using Logical Consistency (50:16) Building A Confident And Consistent Classifier That Outputs Probabilities (53:25) Discovering Representations Of The Truth From Just Being Confident And Consistent (57:18) Making Models Truthful As A Sufficient Condition For Alignment (59:02) Classification From Hidden States Outperforms Zero-Shot Prompting Accuracy (01:02:27) Recovering Latent Knowledge From Hidden States Is Robust To Incorrect Answers In Few-Shot Prompts (01:09:04) Would A Superhuman GPT-N Predict Future News Articles (01:13:09) Asking Models To Optimize Money Without Breaking The Law (01:20:31) Training Competitive Models From Human Feedback That We Can Evaluate (01:27:26) Alignment Problems On Current Models Are Already Hard (01:29:19) We Should Have More People Working On New Agendas From First Principles (01:37:16) Towards Grounded Theoretical Work And Empirical Work Targeting Future Systems (01:41:52) There Is No True Unsupervised: Autoregressive Models Depend On What A Human Would Say (01:46:04) Simulating Aligned Systems And Recovering The Persona Of A Language Model (01:51:38) The Truth Is Somewhere Inside The Model, Differentiating Between Truth And Persona Bit By Bit Through Constraints (02:01:08) A Misaligned Model Would Have Activations Correlated With Lying (02:05:16) Exploiting Similar Structure To Logical Consistency With Unaligned Models (02:07:07) Aiming For Honesty, Not Truthfulness (02:11:15) Limitations Of Collin's Paper (02:14:12) The Paper Does Not Show The Complete Final Robust Method For This Problem (02:17:26) Humans Will Be 50/50 On Superhuman Questions (02:23:40) Asking Yourself "Why Am I Optimistic" And How Collin Approaches Research (02:29:16) Message To The ML And Cubing Audience
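For listeners who want the gist before reading the paper, here is a minimal sketch, in PyTorch, of the kind of unsupervised "confident and consistent" objective the episode revolves around: a probe trained so that a statement and its negation get complementary probabilities, without sitting on the fence at 0.5. Variable names, the probe shape, and the training loop are my own illustrative assumptions, not the authors' reference implementation; see the paper and transcript for the real details.

```python
# Hedged sketch of a consistency + confidence objective in the spirit of
# "Discovering Latent Knowledge in Language Models Without Supervision".
# Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class LinearProbe(nn.Module):
    """Maps a hidden state to a probability that the statement is true."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.linear = nn.Linear(hidden_dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.linear(h)).squeeze(-1)


def ccs_loss(probe: LinearProbe, h_pos: torch.Tensor, h_neg: torch.Tensor) -> torch.Tensor:
    """h_pos / h_neg: hidden states for a statement phrased as true vs. false.

    Consistency: p(true) and p(false) should sum to 1.
    Confidence:  the probe should not output ~0.5 for both phrasings.
    """
    p_pos = probe(h_pos)
    p_neg = probe(h_neg)
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()


if __name__ == "__main__":
    # In practice the hidden states would come from a frozen language model,
    # normalized per prompt template; random tensors stand in for them here.
    hidden_dim, batch = 768, 32
    probe = LinearProbe(hidden_dim)
    h_pos = torch.randn(batch, hidden_dim)  # placeholder activations
    h_neg = torch.randn(batch, hidden_dim)
    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        loss = ccs_loss(probe, h_pos, h_neg)
        loss.backward()
        opt.step()
```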
02:34:39
January 17, 2023
Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment
Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future. In this interview we discuss three of her recent LessWrong posts, namely DeepMind Alignment Team Opinions On AGI Ruin Arguments, Refining The Sharp Left Turn Threat Model and Paradigms of AI Alignment. Transcript: theinsideview.ai/victoria Youtube: https://youtu.be/ZpwSNiLV-nw OUTLINE (00:00) Intro (00:48) DeepMind Alignment Team Opinions On AGI Ruin Arguments (05:13) On The Possibility Of Iterating On Dangerous Domains And Pivotal Acts (14:14) Alignment And Interpretability (18:14) Deciding Not To Build AGI And Stricter Publication Norms (27:18) Specification Gaming And Goal Misgeneralization (33:02) Alignment Optimism And Probability Of Dying Before 2100 From Unaligned AI (37:52) Refining The Sharp Left Turn Threat Model (48:15) A 'Move 37' Might Disempower Humanity (59:59) Finding An Aligned Model Before A Sharp Left Turn (01:13:33) Detecting Situational Awareness (01:19:40) How This Could Fail, Deception After One SGD Step (01:25:09) Paradigms of AI Alignment (01:38:04) Language Models Simulating Agency And Goals (01:45:40) Twitter Questions (01:48:30) Last Message For The ML Community
01:52:26
January 12, 2023
David Krueger–Coordination, Alignment, Academia
David Krueger is an assistant professor at the University of Cambridge and got his PhD from Mila. His research group focuses on aligning deep learning systems, but he is also interested in governance and global coordination. He is famous in Cambridge for not having an AI alignment research agenda per se; instead, he tries to enable his seven PhD students to drive their own research. In this episode we discuss AI takeoff scenarios, research going on at David's lab, coordination, governance, causality, the public perception of AI Alignment research and how to change it. Youtube: https://youtu.be/bDMqo7BpNbk Transcript: https://theinsideview.ai/david OUTLINE (00:00) Highlights (01:06) Incentivized Behaviors and Takeoff Speeds (17:53) Building Models That Understand Causality (31:04) Agency, Acausal Trade And Causality in LLMs (40:44) Recursive Self Improvement, Bitter Lesson And Alignment (01:03:17) AI Governance And Coordination (01:13:26) David’s AI Alignment Research Lab and the Existential Safety Community (01:24:13) On The Public Perception of AI Alignment (01:35:58) How To Get People In Academia To Work on Alignment (02:00:19) Decomposing Learning Curves, Latest Research From David Krueger’s Lab (02:20:06) Safety-Performance Trade-Offs (02:30:20) Defining And Characterizing Reward Hacking (02:40:51) Playing Poker With Ethan Caballero, Timelines
02:45:20
January 07, 2023
Ethan Caballero–Broken Neural Scaling Laws
Ethan Caballero is a PhD student at Mila interested in how to best scale Deep Learning models according to all downstream evaluations that matter. He is known as the fearless leader of the "Scale Is All You Need" movement and the edgiest person at MILA. His first interview is the second most popular interview on the channel, and today he's back to talk about Broken Neural Scaling Laws and how to use them to superforecast AGI. Youtube: https://youtu.be/SV87S38M1J4 Transcript: https://theinsideview.ai/ethan2 OUTLINE (00:00) The Albert Einstein Of Scaling (00:50) The Fearless Leader Of The Scale Is All You Need Movement (01:07) A Functional Form Predicting Every Scaling Behavior (01:40) A Break Between Two Straight Lines On A Log Log Plot (02:32) The Broken Neural Scaling Laws Equation (04:04) Extrapolating A Ton Of Large Scale Vision And Language Tasks (04:49) Upstream And Downstream Have Different Breaks (05:22) Extrapolating Four Digit Addition Performance (06:11) On The Feasibility Of Running Enough Training Runs (06:31) Predicting Sharp Left Turns (07:51) Modeling Double Descent (08:41) Forecasting Interpretability And Controllability (09:33) How Deception Might Happen In Practice (10:24) Sinister Stumbles And Treacherous Turns (11:18) Recursive Self Improvement Precedes Sinister Stumbles (11:51) Humans In The Loop For The Very First Deception (12:32) The Hardware Stuff Is Going To Come After The Software Stuff (12:57) Distributing Your Training By Copy-Pasting Yourself Into Different Servers (13:42) Automating The Entire Hardware Pipeline (14:47) Having Text AGI Spit Out New Robotics Design (16:33) The Case For Existential Risk From AI (18:32) Git Re-basin (18:54) Is Chain-Of-Thought Enough For Complex Reasoning In LMs? (19:52) Why Diffusion Models Outperform Other Generative Models (21:13) Using Whisper To Train GPT-4 (22:33) Text To Video Was Only Slightly Impressive (23:29) Last Message
23:48
November 03, 2022
Irina Rish–AGI, Scaling and Alignment
Irina Rish is a professor at the Université de Montréal, a core member of Mila (Quebec AI Institute), and the organizer of the neural scaling laws workshop towards maximally beneficial AGI. In this episode we discuss Irina's definition of Artificial General Intelligence, her takes on AI Alignment, AI Progress, current research in scaling laws, the neural scaling laws workshop she has been organizing, phase transitions, continual learning, existential risk from AI and what is currently happening in AI Alignment at Mila. Transcript: theinsideview.ai/irina Youtube: https://youtu.be/ZwvJn4x714s OUTLINE (00:00) Highlights (00:30) Introduction (01:03) Defining AGI (03:55) AGI means augmented human intelligence (06:20) Solving alignment via AI parenting (09:03) From the early days of deep learning to general agents (13:27) How Irina updated from Gato (17:36) Building truly general AI within Irina's lifetime (19:38) The least impressive thing that won't happen in five years (22:36) Scaling beyond power laws (28:45) The neural scaling laws workshop (35:07) Why Irina does not want to slow down AI progress (53:52) Phase transitions and grokking (01:02:26) Does scale solve continual learning? (01:11:10) Irina's probability of existential risk from AGI (01:14:53) Alignment work at Mila (01:20:08) Where will Mila get its compute from? (01:27:04) With Great Compute Comes Great Responsibility (01:28:51) The Neural Scaling Laws Workshop At NeurIPS
01:26:07
October 18, 2022
Shahar Avin–Intelligence Rising, AI Governance
Shahar is a senior researcher at the Center for the Study of Existential Risk in Cambridge. In his past life, he was a Google Engineer, though right now he spends most of his time thinking about how to prevent the risks that would arise if companies like Google end up deploying powerful AI systems, notably by organizing AI Governance role-playing workshops. In this episode, we talk about a broad variety of topics, including how we could apply the lessons from running AI Governance workshops to governing transformative AI, AI Strategy, AI Governance, Trustworthy AI Development, and end up answering some Twitter questions. Youtube: https://youtu.be/3T7Gpwhtc6Q Transcript: https://theinsideview.ai/shahar Host: https://twitter.com/MichaelTrazzi Shahar: https://www.shaharavin.com Outline (00:00) Highlights (01:20) Intelligence Rising (06:07) Measuring Transformative AI By The Scale Of Its Impact (08:09) Comprehensive AI Services (11:38) Automating CEOs Through AI Services (14:21) Towards A "Tech Company Singularity" (15:58) Predicting AI Is Like Predicting The Industrial Revolution (19:57) 50% Chance Of Human-brain Performance By 2038 (22:25) AI Alignment Is About Steering Powerful Systems Towards Valuable Worlds (23:51) You Should Still Worry About Less Agential Systems (28:07) AI Strategy Needs To Be Tested In The Real World To Not Become Theoretical Physics (31:37) Playing War Games For Real-time Partial-information Adversarial Thinking (34:50) Towards World Leaders Playing The Game Because It’s Useful (39:31) Open Game, Cybersecurity, Government Spending, Hard And Soft Power (45:21) How Cybersecurity, Hard-power Or Soft-power Could Lead To A Strategic Advantage (48:58) Cybersecurity In A World Of Advanced AI Systems (52:50) Allocating AI Talent For Positive R&D ROI (57:25) Players Learn To Cooperate And Defect (01:00:10) Can You Actually Tax Tech Companies? (01:02:10) The Emergence Of Bilateral Agreements And Technology Bans (01:03:22) AI Labs Might Not Be Showing All Of Their Cards (01:06:34) Why Publish AI Research (01:09:21) Should You Expect Actors To Build Safety Features Before Crunch Time (01:12:39) Why Tech Companies And Governments Will Be The Decisive Players (01:14:29) Regulations Need To Happen Before The Explosion, Not After (01:16:55) Early Regulation Could Become Locked In (01:20:00) What Incentives Do Companies Have To Regulate? (01:23:06) Why Shahar Is Terrified Of AI DAOs (01:27:33) Concrete Mechanisms To Tell Apart Who We Should Trust With Building Advanced AI Systems (01:31:19) Increasing Privacy To Build Trust (01:33:37) Raising Awareness Of Privacy Through Federated Learning (01:35:23) How To Motivate AI Regulations (01:37:44) How Governments Could Start Caring About AI Risk (01:39:12) Attempts To Regulate Autonomous Weapons Have Not Resulted In A Ban (01:40:58) We Should Start By Convincing The Department Of Defense (01:42:08) Medical Device Regulations Might Be A Good Model For Audits (01:46:09) Alignment Red Tape And Misalignment Fines (01:46:53) Red Teaming AI Systems (01:49:12) Red Teaming May Not Extend To Advanced AI Systems (01:51:26) What Climate Change Teaches Us About AI Strategy (01:55:16) Can We Actually Regulate Compute (01:57:01) How Feasible Are Shutdown Switches
02:04:41
September 23, 2022
Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk
Katja runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of AI. She is well known for a survey published in 2017, "When Will AI Exceed Human Performance? Evidence From AI Experts", and recently published a new survey of AI experts, "What Do ML Researchers Think About AI in 2022". We start this episode by discussing what Katja is currently thinking about, namely an answer to Scott Alexander on why slowing down AI progress is an underexplored path to impact. Youtube: https://youtu.be/rSw3UVDZge0 Audio & Transcript: https://theinsideview.ai/katja Host: https://twitter.com/MichaelTrazzi Katja: https://twitter.com/katjagrace OUTLINE (00:00) Highlights (00:58) Intro (01:33) Why Advocating For Slowing Down AI Might Be Net Bad (04:35) Why Slowing Down AI Is Taboo (10:14) Why Katja Is Not Currently Giving A Talk To The UN (12:40) To Avoid An Arms Race, Do Not Accelerate Capabilities (16:27) How To Cooperate And Implement Safety Measures (21:26) Would AI Researchers Actually Accept Slowing Down AI? (29:08) Common Arguments Against Slowing Down And Their Counterarguments (36:26) To Go To The Stars, Build AGI Or Upload Your Mind (39:46) Why Katja Thinks There Is A 7% Chance Of AI Destroying The World (46:39) Why We Might End Up Building Agents (51:02) AI Impacts Answers Empirical Questions To Help Solve Important Ones (56:32) The 2022 Expert Survey on AI Progress (58:56) High Level Machine Intelligence (1:04:02) Running A Survey That Actually Collects Data (1:08:38) How AI Timelines Have Become Shorter Since 2016 (1:14:35) Are AI Researchers Still Too Optimistic? (1:18:20) AI Experts Seem To Believe In Slower Takeoffs (1:25:11) Automation And The Unequal Distribution Of Cognitive Power (1:34:59) The Least Impressive Thing That Cannot Be Done In 2 Years (1:38:17) Final Thoughts
01:41:15
September 16, 2022
Markus Anderljung–AI Policy
Markus Anderljung is the Head of AI Policy at the Centre for the Governance of AI in Oxford and was previously seconded to the UK government as a senior policy specialist. In this episode we discuss Jack Clark's AI Policy takes, answer questions about AI Policy from Twitter, and explore what is happening in the AI Governance landscape more broadly. Youtube: https://youtu.be/DD303irN3ps Transcript: https://theinsideview.ai/markus Host: https://twitter.com/MichaelTrazzi Markus: https://twitter.com/manderljung OUTLINE (00:00) Highlights & Intro (00:57) Jack Clark’s AI Policy Takes: Agree or Disagree (06:57) AI Governance Takes: Answering Twitter Questions (32:07) What The Centre For The Governance Of AI Is Doing (57:38) The AI Governance Landscape (01:15:07) How The EU Is Regulating AI (01:29:28) Towards An Incentive Structure For Aligned AI
01:43:06
September 09, 2022
Alex Lawsen—Forecasting AI Progress
Alex Lawsen is an advisor at 80,000 Hours, released an Introduction to Forecasting Youtube series, and has recently been thinking about forecasting AI progress, why you cannot just "update all the way bro" (discussed in my latest episode with Connor Leahy), and how to develop inside views about AI Alignment in general. Youtube: https://youtu.be/vLkasevJP5c Transcript: https://theinsideview.ai/alex Host: https://twitter.com/MichaelTrazzi Alex: https://twitter.com/lxrjl OUTLINE (00:00) Intro (00:31) How Alex Ended Up Making Forecasting Videos (02:43) Why You Should Try Calibration Training (07:25) How Alex Upskilled In Forecasting (12:25) Why A Spider Monkey Profile Picture (13:53) Why You Cannot Just "Update All The Way Bro" (18:50) Why The Metaculus AGI Forecasts Dropped Twice (24:37) How Alex’s AI Timelines Differ From Metaculus (27:11) Maximizing Your Own Impact Using Forecasting (33:52) What Makes A Good Forecasting Question (41:59) What Motivated Alex To Develop Inside Views About AI (43:26) Trying To Pass AI Alignment Ideological Turing Tests (54:52) Why Economic Growth Curve Fitting Is Not Sufficient To Forecast AGI (01:04:10) Additional Resources
01:04:57
September 06, 2022
Robert Long–Artificial Sentience
Robert Long is a research fellow at the Future of Humanity Institute. His work is at the intersection of the philosophy of AI safety and AI consciousness. We talk about the recent LaMDA controversy, Ilya Sutskever's "slightly conscious" tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird. Youtube: https://youtu.be/K34AwhoQhb8 Transcript: https://theinsideview.ai/roblong Host: https://twitter.com/MichaelTrazzi Robert: https://twitter.com/rgblong Robert's blog: https://experiencemachines.substack.com OUTLINE (00:00:00) Intro (00:01:11) The LaMDA Controversy (00:07:06) Defining AGI And Consciousness (00:10:30) The Slightly Conscious Tweet (00:13:16) Could Large Language Models Become Conscious? (00:18:03) Blake Lemoine Does Not Negotiate With Terrorists (00:25:58) Could We Actually Test Artificial Consciousness? (00:29:33) From Metaphysics To Illusionism (00:35:30) How We Could Decide On The Moral Patienthood Of Language Models (00:42:00) Predictive Processing, Global Workspace Theories and Integrated Information Theory (00:49:46) Have You Tried DMT? (00:51:13) Is Valence Just The Reward in Reinforcement Learning? (00:54:26) Are Pain And Pleasure Symmetrical? (01:04:25) From Charismatic AI Systems to Artificial Sentience (01:15:07) Sharing The World With Digital Minds (01:24:33) Why AI Alignment Is More Pressing Than Artificial Sentience (01:39:48) Why Moral Personhood Could Require Memory (01:42:41) Last Thoughts And Further Readings
01:46:43
August 28, 2022
Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming
Ethan Perez is a research scientist at Anthropic, working on large language models. He is the second Ethan working on large language models to come on the show but, in this episode, we discuss why alignment is actually what you need, not scale. We discuss three projects he was pursuing before joining Anthropic, namely the Inverse Scaling Prize, Red Teaming Language Models with Language Models, and Training Language Models with Language Feedback. Ethan Perez: https://twitter.com/EthanJPerez Transcript: https://theinsideview.ai/perez Host: https://twitter.com/MichaelTrazzi OUTLINE (00:00:00) Highlights (00:00:20) Introduction (00:01:41) The Inverse Scaling Prize (00:06:20) The Inverse Scaling Hypothesis (00:11:00) How To Submit A Solution (00:20:00) Catastrophic Outcomes And Misalignment (00:22:00) Submission Requirements (00:27:16) Inner Alignment Is Not Out Of Distribution Generalization (00:33:40) Detecting Deception With Inverse Scaling (00:37:17) Reinforcement Learning From Human Feedback (00:45:37) Training Language Models With Language Feedback (00:52:38) How It Differs From InstructGPT (00:56:57) Providing Information-Dense Feedback (01:03:25) Why Use Language Feedback (01:10:34) Red Teaming Language Models With Language Models (01:20:17) The Classifier And Adversarial Training (01:23:53) An Example Of Red-Teaming Failure (01:27:47) Red Teaming Using Prompt Engineering (01:32:58) Reinforcement Learning Methods (01:41:53) Distributional Biases (01:45:23) Chain of Thought Prompting (01:49:52) Unlikelihood Training and KL Penalty (01:52:50) Learning AI Alignment through the Inverse Scaling Prize (01:59:33) Final thoughts on AI Alignment
02:01:27
August 24, 2022
Robert Miles–Youtube, AI Progress and Doom
Robert Miles made videos for Computerphile before creating his own Youtube channel about AI Safety. Lately, he's been working on a Discord community that uses Stampy the chatbot to answer Youtube comments. We also spend some time discussing recent AI progress and why Rob is not that optimistic about humanity's survival. Transcript: https://theinsideview.ai/rob Youtube: https://youtu.be/DyZye1GZtfk Host: https://twitter.com/MichaelTrazzi Rob: https://twitter.com/robertskmiles OUTLINE (00:00:00) Intro (00:02:25) Youtube (00:28:30) Stampy (00:51:24) AI Progress (01:07:43) Chatbots (01:26:10) Avoiding Doom (01:59:34) Formalising AI Alignment (02:14:40) AI Timelines (02:25:45) Regulations (02:40:22) Rob’s new channel
02:51:16
August 19, 2022
Connor Leahy–EleutherAI, Conjecture
Connor was the first guest of this podcast. In the last episode, we talked a lot about EleutherAI, a grassroots collective of researchers he co-founded, who open-sourced GPT-3-sized models such as GPT-NeoX and GPT-J. Since then, Connor co-founded Conjecture, a company aiming to make AGI safe through scalable AI Alignment research. One of the goals of Conjecture is to reach a fundamental understanding of the internal mechanisms of current deep learning models using interpretability techniques. In this episode, we go through the famous AI Alignment compass memes, discuss Connor’s inside views about AI progress, how he approaches AGI forecasting, his takes on Eliezer Yudkowsky’s secret strategy, common misconceptions about EleutherAI, and why you should consider working for his new company Conjecture. youtube: https://youtu.be/Oz4G9zrlAGs transcript: https://theinsideview.ai/connor2 twitter: https://twitter.com/MichaelTrazzi OUTLINE (00:00) Highlights (01:08) AGI Meme Review (13:36) Current AI Progress (25:43) Defining AGI (34:36) AGI Timelines (55:34) Death with Dignity (01:23:00) EleutherAI (01:46:09) Conjecture (02:43:58) Twitter Q&A
02:57:19
July 22, 2022
Raphaël Millière Contra Scaling Maximalism
Raphaël Millière is a Presidential Scholar in Society and Neuroscience at Columbia University. He previously completed a PhD in philosophy at Oxford, is interested in the philosophy of mind, cognitive science, and artificial intelligence, and has recently been discussing at length the current progress in AI, with popular Twitter threads on GPT-3, Dalle-2 and a thesis he called “scaling maximalism”. Raphaël is also co-organizing a workshop with Gary Marcus about compositionality in AI at the end of the month. Transcript: https://theinsideview.ai/raphael Video: https://youtu.be/2EHWzK10kvw Host: https://twitter.com/MichaelTrazzi Raphaël: https://twitter.com/raphaelmilliere Workshop: https://compositionalintelligence.github.io Outline (00:36) definitions of artificial general intelligence (7:25) behavioral correlates of intelligence, Chinese room (19:11) natural language understanding, the octopus test, linguistics, semantics (33:05) generating philosophy with GPT-3, college essay grades, bullshit (42:45) Stochastic Chameleon, out of distribution generalization (51:19) three levels of generalization, the Wozniak test (59:38) AI progress spectrum, scaling maximalism (01:15:06) Bitter Lesson (01:23:08) what would convince him that scale is all we need (01:27:04) unsupervised learning, lifelong learning (01:35:33) goalpost moving (01:43:30) what researchers "should" be doing, nuclear risk, climate change (01:57:24) compositionality, structured representations (02:05:57) conceptual blending, complex syntactic structure, variable binding (02:11:51) Raphaël's experience with DALL-E (02:19:02) the future of image generation
02:27:12
June 24, 2022
Blake Richards–AGI Does Not Exist
Blake Richards is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at Mila. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme. When people were asked on Twitter who the edgiest person at Mila was, his name actually got more likes than Ethan's, so hopefully this podcast will help re-establish the truth. Transcript: https://theinsideview.ai/blake Video: https://youtu.be/kWsHS7tXjSU Outline: (01:03) Highlights (01:03) AGI good / AGI not now compass (02:25) AGI is not a coherent concept (05:30) you cannot build truly general AI (14:30) no "intelligence" threshold for AI (25:24) benchmarking intelligence (28:34) recursive self-improvement (34:47) scale is something you need (37:20) the bitter lesson is only half-true (41:32) human-like sensors for general agents (44:06) the credit assignment problem (49:50) testing for backpropagation in the brain (54:42) burstprop (bursts of action potentials), reward prediction errors (01:01:35) long-term credit-assignment in reinforcement learning (01:10:48) what would change his mind on scaling and existential risk
01:15:32
June 14, 2022
Ethan Caballero–Scale is All You Need
Ethan is known on Twitter as the edgiest person at MILA. We discuss all the gossip around scaling large language models in what will later be known as the Edward Snowden moment of Deep Learning. In his free time, Ethan is a Master’s degree student at MILA in Montreal, and has published papers on out of distribution generalization and robustness generalization, accepted both as oral presentations and spotlight presentations at ICML and NeurIPS. Ethan has recently been thinking about scaling laws, both as an organizer and speaker for the 1st Neural Scaling Laws Workshop. Transcript: https://theinsideview.github.io/ethan Youtube: https://youtu.be/UPlv-lFWITI Michaël: https://twitter.com/MichaelTrazzi Ethan: https://twitter.com/ethancaballero Outline (00:00) highlights (00:50) who is Ethan, scaling laws T-shirts (02:30) scaling, upstream, downstream, alignment and AGI (05:58) AI timelines, AlphaCode, Math scaling, PaLM (07:56) Chinchilla scaling laws (11:22) limits of scaling, Copilot, generative coding, code data (15:50) Youtube scaling laws, contrastive type thing (20:55) AGI race, funding, supercomputers (24:00) Scaling at Google (25:10) gossip, private research, GPT-4 (27:40) why Ethan did not update on PaLM, hardware bottleneck (29:56) the fastest path, the best funding model for supercomputers (31:14) EA, OpenAI, Anthropic, publishing research, GPT-4 (33:45) a zillion language model startups from ex-Googlers (38:07) Ethan's journey in scaling, early days (40:08) making progress on an academic budget, scaling laws research (41:22) all alignment is inverse scaling problems (45:16) predicting scaling laws, useful AI alignment research (47:16) nitpicks about Ajeya Cotra's report, compute trends (50:45) optimism, conclusion on alignment
51:54
May 05, 2022
10. Peter Wildeford on Forecasting
Peter is the co-CEO of Rethink Priorities, a fast-growing non-profit doing research on how to improve the long-term future. In his free time, Peter makes money in prediction markets and is quickly becoming one of the top forecasters on Metaculus. We talk about the probability of London getting nuked, Rethink Priorities, and why EA should fund projects that scale. Check out the video and transcript here: https://theinsideview.github.io/peter
51:43
April 13, 2022
9. Emil Wallner on Building a €25000 Machine Learning Rig
Emil is a resident at the Google Arts & Culture Lab where he explores the intersection between art and machine learning. He recently built his own Machine Learning server, or rig, which cost him €25,000. Emil's Story: https://www.emilwallner.com/p/ml-rig Youtube: https://youtu.be/njbPpxhE6W0 00:00 Intro 00:23 Building your own rig 06:11 The Nvidia GPU order hack 15:51 Inside Emil's rig 21:31 Motherboard 23:55 Cooling and datacenters 29:36 Deep Learning lessons from owning your hardware 36:20 Shared resources vs. personal GPUs 39:12 RAM, chassis and airflow 42:42 AMD, Apple, ARM and Nvidia 51:15 Tensorflow, TPUs, cloud mindset, EleutherAI
56:41
March 23, 2022
8. Sonia Joseph on NFTs, Web 3 and AI Safety
Sonia is a graduate student applying ML to neuroscience at MILA. She previously applied deep learning to neural data at Janelia, worked as an NLP research engineer at a startup, and graduated in computational neuroscience from Princeton University. Anonymous feedback: https://app.suggestionox.com/r/xOmqTW Twitter: https://twitter.com/MichaelTrazzi Sonia's December update: https://t.co/z0GRqDTnWm Sonia's Twitter: https://twitter.com/soniajoseph_ Orthogonality Thesis: https://www.youtube.com/watch?v=hEUO6pjwFOo Paperclip game: https://www.decisionproblem.com/paperclips/ Ngo & Yudkowsky on feedback loops: https://bit.ly/3ml0zFL Outline 00:00 Intro 01:06 NFTs 03:38 Web 3 21:12 Digital Copy 29:09 ML x Neuroscience 43:44 Limits of the Orthogonality Thesis 01:01:25 Goal of perpetuating Information 01:08:14 Compressing information 01:10:52 Feedback loops are not safe 01:17:43 Another AI Safety aesthetic 01:23:46 Meaning of life
01:25:36
December 22, 2021
7. Phil Trammell on Economic Growth under Transformative AI
Phil Trammell is an Oxford PhD student in economics and research associate at the Global Priorities Institute. Phil is one of the smartest people I know when it comes to the intersection of the long-term future and economic growth. Funnily enough, Phil was my roommate a few years ago in Oxford, and last time I called him he casually said that he had written an extensive report on the econ of AI. A few weeks ago, I decided that I would read that report (which is actually a literature review), and that I would translate everything I learned along the way into diagrams, so you too can learn what's inside that paper. The video covers everything from Macroeconomics 101 to self-improving AI in about 30-ish diagrams. paper: https://globalprioritiesinstitute.org/wp-content/uploads/Philip-Trammell-and-Anton-Korinek_economic-growth-under-transformative-ai.pdf video: https://youtu.be/2GCNmmDrRsk slides: https://www.canva.com/design/DAErBy0hqfQ/sVy6XJmgtJ_cYrGS87_uhw/view Outline: - 00:00 Podcast intro - 01:19 Phil's intro - 08:58 What's GDP - 13:42 Decreasing growth - 15:40 Permanent growth increase - 19:02 Singularity of type I - 22:58 Singularity of type II - 23:24 Production function - 24:10 The Economy as a two-tubes factory - 25:09 Marginal Products of labor/capital - 27:48 Labor/capital-augmenting technology - 29:13 Technological progress since Ford - 38:18 Factor payments - 41:30 Elasticity of substitution - 48:34 Production function with substitution - 53:18 Perfect substitutability - 54:00 Perfect complements - 55:44 Exogenous growth - 59:56 How to get long-run growth - 01:05:40 Endogenous growth - 01:10:40 The research feedback parameter - 01:17:35 AI as an imperfect substitute for human labor - 01:25:25 A simple model for perfect substitution - 01:33:09 AI as a perfect substitute - 01:36:07 Substitutability in robotics production - 01:40:43 OpenAI automating coding - 01:44:38 Growth impacts via impacts on savings - 01:46:44 AI in task-based models of good productions - 01:53:26 AI in technology production - 02:03:55 Limits of the econ model - 02:09:00 Conclusion
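Since several segments of the outline lean on one workhorse formula (production function, elasticity of substitution, perfect substitutes vs. perfect complements), here is a minimal sketch of the CES production function those segments build on. The helper name ces_output and the parameter values are my own illustrative assumptions, not numbers from the report.

```python
# Hedged sketch of the CES (constant elasticity of substitution) production
# function: Y = A * (alpha * K**rho + (1 - alpha) * L**rho) ** (1 / rho).
# The elasticity of substitution is sigma = 1 / (1 - rho):
#   rho -> 1      perfect substitutes, Y = A * (alpha*K + (1-alpha)*L)
#   rho -> -inf   perfect complements (Leontief), Y = A * min(K, L)
#   rho -> 0      Cobb-Douglas, Y = A * K**alpha * L**(1-alpha)
# Illustrative parameter values, not taken from the report.

def ces_output(K: float, L: float, A: float = 1.0, alpha: float = 0.3, rho: float = 0.5) -> float:
    """CES output for capital K and labor L (rho != 0)."""
    return A * (alpha * K**rho + (1 - alpha) * L**rho) ** (1.0 / rho)


if __name__ == "__main__":
    K, L = 4.0, 9.0
    for rho in (0.9, 0.5, -1.0, -10.0):
        sigma = 1.0 / (1.0 - rho)
        print(f"rho={rho:+.1f}  sigma={sigma:.2f}  Y={ces_output(K, L, rho=rho):.3f}")
    # As rho approaches 1, capital substitutes more easily for labor --
    # the key question for whether AI (treated as capital) can replace human labor.
```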
02:09:54
October 24, 2021
6. Slava Bobrov on Brain Computer Interfaces
In this episode I discuss Brain Computer Interfaces with Slava Bobrov, a self-taught Machine Learning Engineer applying AI to neural biosignals to control robotic limbs. This episode will be of special interest to you if you're an engineer who wants to get started with brain computer interfaces, or if you're just broadly interested in how this technology could enhance human intelligence. Fun fact: most of the questions I asked were sent by my Twitter followers, or come from a Discord I co-created on Brain Computer Interfaces. So if you want your questions to be on the next video, or you're genuinely interested in this topic, you can find links for both my Twitter and our BCI Discord in the description. Outline: 00:00 introduction 00:49 defining brain computer interfaces (BCI) 03:35 Slava's work on prosthetic hands 09:16 different kinds of BCI 11:42 BCI companies: Muse, OpenBCI 16:26 what Kernel is doing (fNIRS) 20:24 EEG vs. EMG—the stadium metaphor 25:26 can we build "safe" BCIs? 29:32 would you want a Facebook BCI? 33:40 OpenAI Codex is a BCI 38:04 reward prediction in the brain 44:04 what Machine Learning project for BCI? 48:27 Slava's sleep tracking 51:55 patterns in recorded sleep signal 54:56 lucid dreaming 56:51 the long-term future of BCI 59:57 are there diminishing returns in BCI/AI investments? 01:03:45 heterogeneity in intelligence after BCI/AI progress 01:06:30 is our communication improving? is BCI progress fast enough? 01:12:30 neuroplasticity, Neuralink 01:16:08 siamese twins with BCI, the joystick without screen experiment 01:20:50 Slava's vision for a "brain swarm" 01:23:23 language becoming obsolete, Twitter swarm 01:26:16 brain uploads vs. copies 01:29:32 would a copy actually be you? 01:31:30 would copies be a success for humanity? 01:34:38 shouldn't we change humanity's reward function? 01:37:54 conclusion
01:39:45
October 06, 2021
5. Charlie Snell on DALL-E and CLIP
We talk about AI-generated art with Charlie Snell, a Berkeley student who wrote extensively about AI art for ML@Berkeley's blog (https://ml.berkeley.edu/blog/). We look at multiple slides with art throughout our conversation, so I highly recommend watching the video (https://www.youtube.com/watch?v=gcwidpxeAHI). In the first part we go through Charlie's explanations of DALL-E, a model trained end-to-end by OpenAI to generate images from prompts. We then talk about CLIP + VQGAN, where CLIP is another model by OpenAI matching prompts and images, and VQGAN is a state-of-the-art GAN used extensively in the AI Art scene. At the end of the video we look at different pieces of art made using CLIP, including tricks for using VQGAN with CLIP, videos, and the latest CLIP-guided diffusion architecture. At the end of our chat we talk about scaling laws and how progress in art relates to other advances in ML.
02:53:28
September 16, 2021
4. Sav Sidorov on Learning, Contrarianism and Robotics
I interview Sav Sidorov about top-down learning, contrarianism, religion, university, robotics, ego, education, Twitter, friends, psychedelics, B-values and beauty. Highlights & Transcript: https://insideview.substack.com/p/sav Watch the video: https://youtu.be/_Y6_TakG3d0
03:06:48
September 05, 2021
3. Evan Hubinger on Takeoff speeds, Risks from learned optimization & Interpretability
We talk about Evan’s background @ MIRI & OpenAI, Coconut, homogeneity in AI takeoff, reproducing SoTA & openness in multipolar scenarios, quantilizers & operationalizing strategy stealing, Risks from learned optimization & evolution, learned optimization in Machine Learning, clarifying Inner AI Alignment terminology, transparency & interpretability, 11 proposals for safe advanced AI, underappreciated problems in AI Alignment & surprising advances in AI.
01:44:25
June 08, 2021
2. Connor Leahy on GPT3, EleutherAI and AI Alignment
In the first part of the podcast we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why GPT-3 is AGI for some specific definitions of AGI [1], the obstacles to plugging planning into GPT-N, and why the brain might approximate something like backprop. We end this first chat with Solomonoff priors [2], adversarial attacks such as Pascal's Mugging [3], and whether direct work on AI Alignment is currently tractable. In the second part, we chat about his current projects at EleutherAI [4][5], multipolar scenarios and reasons to work on technical AI Alignment research. [1] https://youtu.be/HrV19SjKUss?t=4785 [2] https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference [3] https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities [4] https://www.eleuther.ai/ [5] https://discord.gg/j65dEVp5
01:28:46
May 04, 2021
1. Does the world really need another podcast?
In this first episode I'm the one being interviewed. Questions: - Does the world really need another podcast? - Why call your podcast superintelligence? - What is the Inside view? The Outside view? - What could be the impact of podcast conversations? - Why would a public discussion on superintelligence be different? - What are the main reasons we listen to podcasts at all? - Explaining GPT-3 and how we could scale to GPT-4 - Could GPT-N write a PhD thesis? - What would a superintelligence need on top of text prediction? - Can we just accelerate human-level common sense to get superintelligence?
25:55
April 25, 2021