Generally Intelligent

By Kanjun Qiu

Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.
Available on Amazon Music, Apple Podcasts, Castbox, Google Podcasts, iHeartRadio, Overcast, Pocket Casts, RadioPublic, and Spotify.
Episode 34: Seth Lazar, Australian National University: On legitimate power, moral nuance, and the political philosophy of AI

Seth Lazar is a professor of philosophy at the Australian National University, where he leads the Machine Intelligence and Normative Theory (MINT) Lab. His unique perspective bridges moral and political philosophy with AI, introducing much-needed rigor to the question of what will make for a good and just AI future.

Generally Intelligent is a podcast by Imbue where we interview researchers about their behind-the-scenes ideas, opinions, and intuitions that are hard to share in papers and talks.

About Imbue
Imbue is an independent research company developing AI agents that mirror the fundamentals of human-like intelligence and that can learn to safely solve problems in the real world. We started Imbue because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.

Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.

Website: https://imbue.com/
LinkedIn: https://www.linkedin.com/company/imbue-ai/
Twitter: @imbue_ai


Mar 12, 2024 · 01:55:46
Episode 33: Tri Dao, Stanford: On FlashAttention and sparsity, quantization, and efficient inference

Tri Dao is a PhD student at Stanford, co-advised by Stefano Ermon and Chris Re. He’ll be joining Princeton as an assistant professor next year. He works at the intersection of machine learning and systems, currently focused on efficient training and long-range context.


About Generally Intelligent 

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.  

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.  

Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.  


Learn more about us

Website: https://generallyintelligent.com/

LinkedIn: linkedin.com/company/generallyintelligent/ 

Twitter: @genintelligent

Aug 09, 2023 · 01:20:30
Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

Jamie Simon is a 4th-year Ph.D. student at UC Berkeley advised by Mike DeWeese, and also a Research Fellow with us at Generally Intelligent. He uses tools from theoretical physics to build a fundamental understanding of deep neural networks so they can be designed from first principles. In this episode, we discuss reverse-engineering kernels, the conservation of learnability during training, infinite-width neural networks, and much more.

About Generally Intelligent 

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.  

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.  

Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.  


Learn more about us

Website: https://generallyintelligent.com/

LinkedIn: linkedin.com/company/generallyintelligent/ 

Twitter: @genintelligent

Jun 22, 2023 · 01:01:55
Episode 31: Bill Thompson, UC Berkeley, on how cultural evolution shapes knowledge acquisition

Bill Thompson is a cognitive scientist and an assistant professor at UC Berkeley. He runs an experimental cognition laboratory where he and his students conduct research on human language and cognition using large-scale behavioral experiments, computational modeling, and machine learning. In this episode, we explore the impact of cultural evolution on human knowledge acquisition, how pure biological evolution can lead to slow adaptation and overfitting, and much more.


About Generally Intelligent 

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.  

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.  

Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.  


Learn more about us

Website: https://generallyintelligent.com/

LinkedIn: linkedin.com/company/generallyintelligent/ 

Twitter: @genintelligent

Mar 29, 2023 · 01:15:25
Episode 30: Ben Eysenbach, CMU, on designing simpler and more principled RL algorithms

Ben Eysenbach is a PhD student at CMU and a student researcher at Google Brain. He is co-advised by Sergey Levine and Ruslan Salakhutdinov, and his research focuses on developing RL algorithms that achieve state-of-the-art performance while being simpler, more scalable, and more robust. Recent problems he’s tackled include long-horizon reasoning, exploration, and representation learning. In this episode, we discuss designing simpler and more principled RL algorithms, and much more.

About Generally Intelligent 

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.  

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.  

Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.  


Learn more about us

Website: https://generallyintelligent.com/

LinkedIn: linkedin.com/company/generallyintelligent/ 

Twitter: @genintelligent

Mar 23, 2023 · 01:45:56
Episode 29: Jim Fan, NVIDIA, on foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant

Jim Fan is a research scientist at NVIDIA who got his PhD at Stanford under Fei-Fei Li. Jim is interested in building generally capable autonomous agents, and he recently published MineDojo, a massively multiscale benchmarking suite built on Minecraft, which won an Outstanding Paper award at NeurIPS. In this episode, we discuss foundation models for embodied agents, scaling data, and why prompt engineering will become irrelevant.


About Generally Intelligent 

We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.  

We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.  

Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.  


Learn more about us

Website: https://generallyintelligent.com/

LinkedIn: linkedin.com/company/generallyintelligent/ 

Twitter: @genintelligent

Mar 09, 2023 · 01:26:46
Episode 28: Sergey Levine, UC Berkeley, on the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems

Sergey Levine, an assistant professor of EECS at UC Berkeley, is one of the pioneers of modern deep reinforcement learning. His research focuses on developing general-purpose algorithms for autonomous agents to learn how to solve any task. In this episode, we talk about the bottlenecks to generalization in reinforcement learning, why simulation is doomed to succeed, and how to pick good research problems.

Mar 01, 2023 · 01:34:49
Episode 27: Noam Brown, FAIR, on achieving human-level performance in poker and Diplomacy, and the power of spending compute at inference time

Noam Brown is a research scientist at FAIR. During his Ph.D. at CMU, he made the first AI to defeat top humans in No Limit Texas Hold 'Em poker. More recently, he was part of the team that built CICERO which achieved human-level performance in Diplomacy. In this episode, we extensively discuss ideas underlying both projects, the power of spending compute at inference time, and much more.

Feb 09, 2023 · 01:44:55
Episode 26: Sugandha Sharma, MIT, on biologically inspired neural architectures, how memories can be implemented, and control theory

Sugandha Sharma is a Ph.D. candidate at MIT advised by Prof. Ila Fiete and Prof. Josh Tenenbaum. She explores the computational and theoretical principles underlying higher cognition in the brain by constructing neuro-inspired models and mathematical tools to discover how the brain navigates the world, or how to construct memory mechanisms that don’t exhibit catastrophic forgetting. In this episode, we chat about biologically inspired neural architectures, how memory could be implemented, why control theory is underrated and much more.

Jan 17, 2023 · 01:44:01
Episode 25: Nicklas Hansen, UCSD, on long-horizon planning and why algorithms don't drive research progress

Nicklas Hansen is a Ph.D. student at UC San Diego advised by Prof Xiaolong Wang and Prof Hao Su. He is also a student researcher at Meta AI. Nicklas' research interests involve developing machine learning systems, specifically neural agents, that have the ability to learn, generalize, and adapt over their lifetime. In this episode, we talk about long-horizon planning, adapting reinforcement learning policies during deployment, why algorithms don't drive research progress, and much more!

Dec 16, 2022 · 01:49:19
Episode 24: Jack Parker-Holder, DeepMind, on open-endedness, evolving agents and environments, online adaptation, and offline learning

Jack Parker-Holder recently joined DeepMind after his Ph.D. with Stephen Roberts at Oxford. Jack is interested in using reinforcement learning to train generally capable agents, especially via an open-ended learning process where environments can adapt to constantly challenge the agent's capabilities. Before doing his Ph.D., Jack worked for 7 years in finance at JP Morgan. In this episode, we chat about open-endedness, evolving agents and environments, online adaptation, offline learning with world models, and much more.

Dec 06, 2022 · 01:56:43
Episode 23: Celeste Kidd, UC Berkeley, on attention and curiosity, how we form beliefs, and where certainty comes from

Celeste Kidd is a professor of psychology at UC Berkeley. Her lab studies the processes involved in knowledge acquisition; essentially, how we form our beliefs over time and what allows us to select a subset of all the information we encounter in the world to form those beliefs. In this episode, we chat about attention and curiosity, beliefs and expectations, where certainty comes from, and much more.

Nov 22, 2022 · 01:52:35
Episode 22: Archit Sharma, Stanford, on unsupervised and autonomous reinforcement learning

Archit Sharma is a Ph.D. student at Stanford advised by Chelsea Finn. His recent work is focused on autonomous deep reinforcement learning—that is, getting real world robots to learn to deal with unseen situations without human interventions. Prior to this, he was an AI resident at Google Brain and he interned with Yoshua Bengio at Mila. In this episode, we chat about unsupervised, non-episodic, autonomous reinforcement learning and much more.

Nov 17, 2022 · 01:38:14
Episode 21: Chelsea Finn, Stanford, on the biggest bottlenecks in robotics and reinforcement learning

Chelsea Finn is an Assistant Professor at Stanford and part of the Google Brain team. She's interested in the capability of robots and other agents to develop broadly intelligent behavior through learning and interaction at scale. In this episode, we chat about some of the biggest bottlenecks in RL and robotics—including distribution shifts, Sim2Real, and sample efficiency—as well as what makes a great researcher, why she aspires to build a robot that can make cereal, and much more.

Nov 03, 2022 · 40:08
Episode 20: Hattie Zhou, Mila, on supermasks, iterative learning, and fortuitous forgetting

Hattie Zhou is a Ph.D. student at Mila working with Hugo Larochelle and Aaron Courville. Her research focuses on understanding how and why neural networks work, starting with deconstructing why lottery tickets work and most recently exploring how forgetting may be fundamental to learning. Prior to Mila, she was a data scientist at Uber and did research with Uber AI Labs. In this episode, we chat about supermasks and sparsity, coherent gradients, iterative learning, fortuitous forgetting, and much more.

Oct 14, 2022 · 01:47:29
Episode 19: Minqi Jiang, UCL, on environment and curriculum design for general RL agents

Minqi Jiang is a Ph.D. student at UCL and FAIR, advised by Tim Rocktäschel and Edward Grefenstette. Minqi is interested in how simulators can enable AI agents to learn useful behaviors that generalize to new settings. He is especially focused on problems at the intersection of generalization, human-AI coordination, and open-ended systems. In this episode, we chat about environment and curriculum design for reinforcement learning, model-based RL, emergent communication, open-endedness, and artificial life.

Jul 19, 2022 · 01:53:59
Episode 18: Oleh Rybkin, UPenn, on exploration and planning with world models

Oleh Rybkin is a Ph.D. student at the University of Pennsylvania and a student researcher at Google. He is advised by Kostas Daniilidis and Sergey Levine. Oleh's research focus is on reinforcement learning, particularly unsupervised and model-based RL in the visual domain. In this episode, we discuss agents that explore and plan (and do yoga), how to learn world models from video, what's missing from current RL research, and much more!

Jul 11, 2022 · 02:00:41
Episode 17: Andrew Lampinen, DeepMind, on symbolic behavior, mental time travel, and insights from psychology

Andrew Lampinen is a Research Scientist at DeepMind. He previously completed his Ph.D. in cognitive psychology at Stanford. In this episode, we discuss generalization and transfer learning, how to think about language and symbols, what AI can learn from psychology (and vice versa), mental time travel, and the need for more human-like tasks. [Podcast errata: Susan Goldin-Meadow accidentally referred to as Susan Gelman @00:30:34] 

Feb 28, 2022 · 01:59:08
Episode 16: Yilun Du, MIT, on energy-based models, implicit functions, and modularity

Yilun Du is a graduate student at MIT advised by Professors Leslie Kaelbling, Tomas Lozano-Perez, and Josh Tenenbaum. He's interested in building robots that can understand the world like humans and construct world representations that enable task planning over long horizons.

Dec 21, 2021 · 01:24:51
Episode 15: Martín Arjovsky, INRIA, on benchmarks for robustness and geometric information theory

Martín Arjovsky did his Ph.D. at NYU with Leon Bottou. Some of his well-known works include the Wasserstein GAN and a paradigm called Invariant Risk Minimization. In this episode, we discuss out-of-distribution generalization, geometric information theory, and the importance of good benchmarks.

Oct 15, 2021 · 01:26:13
Episode 14: Yash Sharma, MPI-IS, on generalizability, causality, and disentanglement

Yash Sharma is a Ph.D. student at the International Max Planck Research School for Intelligent Systems. He previously studied electrical engineering at Cooper Union and has spent time at Borealis AI and IBM Research. Yash’s early work was on adversarial examples and his current research interests span a variety of topics in representation disentanglement. In this episode, we discuss robustness to adversarial examples, causality vs. correlation in data, and how to make deep learning models generalize better.

Sep 24, 2021 · 01:26:33
Episode 13: Jonathan Frankle, MIT, on the lottery ticket hypothesis and the science of deep learning

Jonathan Frankle is finishing his PhD at MIT, advised by Michael Carbin. His main research interest is using experimental methods to understand the behavior of neural networks. His current work focuses on finding sparse, trainable neural networks.

Highlights from our conversation:

🕸  "Why is sparsity everywhere? This isn't an accident."

🤖  "If I gave you 500 GPUs, could you actually keep those GPUs busy?"

📊  "In general, I think we have a crisis of science in ML."

Sep 10, 2021 · 01:20:33
Episode 12: Jacob Steinhardt, UC Berkeley, on machine learning safety, alignment and measurement

Jacob Steinhardt is an assistant professor at UC Berkeley. His main research interest is in designing machine learning systems that are reliable and aligned with human values. Some of his specific research directions include robustness, reward specification and reward hacking, as well as scalable alignment.

Highlights:

📜“Test accuracy is a very limited metric.”

👨‍👩‍👧‍👦“You might not be able to get lots of feedback on human values.”

📊“I’m interested in measuring the progress in AI capabilities.”

Jun 18, 2021 · 59:39
Episode 11: Vincent Sitzmann, MIT, on neural scene representations for computer vision and more general AI

Vincent Sitzmann is a postdoc at MIT. His work is on neural scene representations in computer vision. Ultimately, he wants to make representations that AI agents can use to solve the same visual tasks humans solve regularly, but that are currently impossible for AI.

Highlights from our conversation:

👁 “Vision is about the question of building representations”

🧠 “We (humans) likely have a 3D inductive bias”

🤖 “All computer vision should be 3D computer vision.  Our world is a 3d world.”

May 20, 2021 · 01:10:10
Episode 10: Dylan Hadfield-Menell, UC Berkeley/MIT, on the value alignment problem in AI

Dylan Hadfield-Menell recently finished his PhD at UC Berkeley and is starting as an assistant professor at MIT. He works on the problem of designing AI algorithms that pursue the intended goal of their users, designers, and society in general. This is known as the value alignment problem.


Highlights from our conversation:

👨‍👩‍👧‍👦 How to align AI to human values

📉 Consequences of misaligned AI -> bias & misdirected optimization

📱 Better AI recommender systems

May 12, 2021 · 01:31:33
Episode 09: Drew Linsley, Brown, on inductive biases for vision and generalization

Drew Linsley is a Paul J. Salem senior research associate at Brown, advised by Thomas Serre. He is working on building computational models of the visual system that serve the dual purpose of (1) explaining biological function and (2) extending artificial vision.

Highlights from our conversation:

🧠 Building neural-inspired inductive biases into computer vision

🖼 A learning algorithm to improve recurrent vision models (C-RBP)

🤖 Creating new benchmarks to move towards generalization

Apr 02, 2021 · 01:11:43
Episode 08: Giancarlo Kerg, Mila, on approaching deep learning from mathematical foundations

Giancarlo Kerg is a PhD student at Mila, supervised by Yoshua Bengio and Guillaume Lajoie. He is working on out-of-distribution generalization and modularity in memory-augmented neural networks.

Highlights from our conversation:

🧮 Pure math foundations as an approach to progress and structural understanding in deep learning research

🧠 How a formal proof that self-attention mitigates gradient vanishing when capturing long-term dependencies in RNNs led to a relevancy screening mechanism resembling human memory consolidation

🎯 Out-of-distribution generalization through modularity and inductive biases

Mar 27, 2021 · 01:09:20
Episode 07: Yujia Huang, Caltech, on neuro-inspired generative models

Yujia Huang is a PhD student at Caltech, working at the intersection of deep learning and neuroscience. She worked on optics and biophotonics before venturing into machine learning. Now, she hopes to design “less artificial” artificial intelligence.

Highlights from our conversation:

🏗 How recurrent generative feedback, a neuro-inspired design, improves adversarial robustness and can be more efficient (fewer labels)

🧠 Adapting theories from neuroscience and classical research for machine learning

📊 What a new Turing test for “less artificial” or generalized AI could look like

💡 Tips for new machine learning researchers!

Mar 18, 2021 · 01:05:12
Episode 06: Julian Chibane, MPI-INF, on 3D reconstruction using implicit functions

Julian Chibane is a PhD student in the Real Virtual Humans group at the Max Planck Institute for Informatics in Germany. His recent work centers around implicit functions for 3D reconstruction.

Highlights from our conversation:

🖼 How, surprisingly, the IF-Net architecture learned reasonable representations of humans & objects without priors

🔢 A simple observation that led to Neural Unsigned Distance Fields, which handle 3D scenes without a clear inside vs. outside (most scenes!)

📚 Navigating open questions in 3D representation, and the importance of focusing on what's working

Mar 05, 2021 · 49:09
Episode 05: Katja Schwarz, MPI-IS, on GANs, implicit functions, and 3D scene understanding

Katja Schwarz came to machine learning from physics, and is now working on 3D geometric scene understanding at the Max Planck Institute for Intelligent Systems. Her most recent work, “Generative Radiance Fields for 3D-Aware Image Synthesis,” revealed that radiance fields are a powerful representation for generative image synthesis, leading to 3D-consistent models that render with high fidelity.

We discuss the ideas in Katja’s work and more:

🥦 the role 3D generation plays in conceptual understanding

📝 tons of practical tips on GAN training

〰 continuous functions as representations for 3D objects

Feb 24, 2021 · 50:34
Episode 04: Joel Lehman, OpenAI, on evolution, open-endedness, and reinforcement learning

Joel Lehman was previously a founding member at Uber AI Labs and assistant professor at the IT University of Copenhagen. He's now a research scientist at OpenAI, where he focuses on open-endedness, reinforcement learning, and AI safety.

Joel’s PhD dissertation introduced the novelty search algorithm. That work inspired him to write the popular science book, “Why Greatness Cannot Be Planned”, with his PhD advisor Ken Stanley, which discusses what evolutionary algorithms imply for how individuals and society should think about objectives.

We discuss this and much more:

- How discovering novelty search totally changed Joel’s philosophy of life

- Can you sometimes reach your objective more quickly by not trying to reach it?

- How one might evolve intelligence

- Why reinforcement learning is a natural framework for open-endedness

Feb 17, 2021 · 01:18:12
Episode 03: Cinjon Resnick, NYU, on activity and scene understanding

Cinjon Resnick was formerly at Google Brain and is now doing his PhD at NYU. We talk about why he believes scene understanding is critical to out-of-distribution generalization, and how his theses have evolved since he started his PhD.

Some topics we cover:

  • How Cinjon started his research by trying to grow a baby through language and games, before running into a wall with this approach
  • How spending time at circuses 🎪 and with gymnasts 🤸🏽‍♂️ re-invigorated his research, and convinced him to focus on video, motion, and activity recognition
  • Why MetaSIM and MetaSIM II are underrated papers
  • Two research ideas Cinjon would like to see others work on
Feb 01, 2021 · 59:54
Episode 02: Sarah Jane Hong, Latent Space, on neural rendering & research process

Sarah Jane Hong is the co-founder of Latent Space, a startup building the first fully AI-rendered 3D engine in order to democratize creativity.

We touch on what it was like taking classes under Geoff Hinton in 2013, the trouble with using natural language prompts to render a scene, why a model’s ability to scale is more important than getting state-of-the-art results, and more.

Jan 07, 2021 · 35:42
Episode 01: Kelvin Guu, Google AI, on language models & overlooked research problems

We interview Kelvin Guu, a researcher at Google AI and the creator of REALM. 

The conversation is a wide-ranging tour of language models, how computers interact with world knowledge, and much more.

Dec 15, 2020 · 47:31