The Embodied AI Podcast

By Akseli Ilmanen

We learn about the world through interaction - through a body. The Embodied AI Podcast believes artificial intelligence should do the same. I interview experts in philosophy, neuroscience, artificial intelligence, robotics, linguistics and more. Join me on a journey from symbolic AI to deep learning, from information processing to distributed cognition, from Wittgenstein to Natural Language Processing, from phenomenology to robots, from x to y, you decide!

Twitter: twitter.com/akseli_ilmanen
Website: linktr.ee/akseli_ilmanen
Email: lai24@bath.ac.uk
Second podcast on Brain, Space and Time!

Hi! I started a second podcast (the Brain Space Time Podcast), available here! Listen to this episode to find out about it!


Timestamps:

(00:00) - What is the Brain Space Time Podcast about?

(01:39) - The podcast logo explained.

(05:28) - Getting in touch.


Links

Henri Bergson's 1896 Matter and Memory PDF (cone figure on p. 61)

Uri Hasson on temporal receptive windows paper


Follow me

For updates on new episode releases, follow me on Twitter.

I welcome your comments, questions, and suggestions. Feel free to email me at akseli.ilmanen@gmail.com

If you are interested in my other work, click here to look at my blog, website, or (ongoing) Bachelor dissertation on time perception semantic networks.

Jul 14, 2023 · 05:48
#7 Tony Zador: The Embodied Turing Test, Genomic Bottlenecks, Molecular Connectomics

Tony has a lab at Cold Spring Harbor, New York. Using rodents, his lab studies the neural circuits underlying auditory decisions; he is also developing new technologies for connectome sequencing and works on NeuroAI. In the episode, after a detour on language and the Costa Rican singing mouse, we discuss his recent paper on 'The Embodied Turing Test' and Moravec's paradox: the idea that what we find hard is easy for AI, and vice versa. We explore how Tony's work in creating a rodent decision-making model might inform a virtual platform for embodied, animal-like agents. Evolution is an underlying thread in the discussion, including his work on the genomic bottleneck, which might 'be a feature, not a bug'. We discuss how Tony is revolutionizing connectomics using molecular sequencing, and why people in AI should care about connectomics and architecture more generally. Finally, we turn to some cultural questions about why people might believe more or less in 'human uniqueness' vs evolutionary continuity, and to some career questions.


Timestamps:

(00:00) - Intro

(02:08) - Tony's background, Costa Rican singing mouse

(06:59) - Traditional & embodied Turing Test, large language models

(15:16) - Mouse intelligence, evolution, modularity, dish-washing dogs?

(26:16) - Platform for training non-human animal-like virtual agents

(36:14) - Exploration in children vs animals, innate vs learning, cognitive maps, complementary learning systems theory

(46:53) - Genomic bottleneck, transfer learning, artificial Lamarckian evolution

(01:02:06) - Why does AI need connectomics?

(01:06:55) - Brainbow, molecular connectomics: MAPseq & BRICseq

(01:14:52) - Comparative (corvid) connectomics

(01:18:04) - "Human uniqueness" - why do/don't people believe in evolutionary continuity

(01:25:29) - Career questions & virtual mouse passing the Embodied Turing Test in 5 years?


Tony's lab website

Tony's Twitter

My Twitter


Papers

Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution - Embodied Turing Test paper (2022)

A critique of pure learning and what artificial neural networks can learn from animal brains paper (2019)

Genomic bottleneck paper (2021)

MAPseq paper (2016)

BRICseq paper (2020)


Squirrel ninja warrior course video

Marbled Lungfish wiki

Paper on corvids



For updates about the latest episodes, follow me on Twitter.

I welcome your comments, questions, and suggestions. Feel free to email me at akseli.ilmanen@gmail.com

If you are interested in my other work, click here to look at my blog, website, or (ongoing) Bachelor dissertation on time perception semantic networks.

Jan 15, 2023 · 01:29:50
#6 Alex Lascarides: Linguistics from Frege to Settlers of Catan

Alex is a professor and the director of the Institute for Language, Cognition and Computation at Edinburgh. She is interested in discourse coherence, gestures, complex games, and interactive task learning. After we find out about Alex's background and geek out over Ludwig Wittgenstein, she tells us about Dynamic Semantics and Segmented Discourse Representation Theory (SDRT). SDRT treats discourse as actions that change the state space of the world and requires agents to infer coherence in the discourse. Then, I initiate a discussion between Felix Hill and Alex by asking her for her opinion on compositionality and playing a clip where Felix gives his "spicy take" on theoretical linguistics. Next, we talk about gestures and how they could be analysed using logic or a deep learning classifier. Then, we talk about non-linguistic events and the conceptualization problem. Later, we discuss Alex's work on Settlers of Catan and how this links to deep reinforcement learning, Monte Carlo tree search, and neurosymbolic AI. Next, we briefly bring up game theory and then talk about interactive task learning, which is about agents learning and adapting in unknown domains. Finally, there are some career questions on whether to do a PhD and what makes a good supervisee and supervisor.


Timestamps:

(00:00) - Intro

(02:00) - Alex's background & Wittgenstein geekiness

(05:15) - Discourse Coherence & Segmented Discourse Representation Theory (SDRT)

(12:56) - Compositionality, Responding to Felix Hill's "spicy take"

(23:50) - Analysing gestures with logic and deep learning

(38:54) - Pointing and evolution

(42:28) - Non-linguistic events in Settlers of Catan, conceptualization problem

(54:15) - 3D simulations and supermarket stocktaking

(59:19) - Settlers of Catan, Monte Carlo tree search, neurosymbolic AI

(01:11:08) - Persuasion & Game Theory

(01:17:23) - Interactive Task Learning, symbol grounding, unknown domain

(01:25:28) - Career advice


Alex's webpage (all articles are open access)

My Twitter


Talks and Papers

Talk on Discourse Coherence and Segmented Discourse Representation Theory

A Formal Semantic Analysis of Gesture paper, with Matthew Stone

A formal semantics for situated conversation paper, with Julie Hunter & Nicholas Asher

Game strategies for The Settlers of Catan paper, with Markus Guhe

Evaluating Persuasion Strategies and Deep Reinforcement Learning methods for Negotiation Dialogue agents paper, with Simon Keizer, Markus Guhe & Oliver Lemon

Learning Language Games through Interaction paper, with Sida Wang, Percy Liang & Christopher Manning

Interactive Task Learning paper, with Mattias Appelgren



Follow the podcast

For new episode releases, follow me on Twitter.

I welcome your comments, questions, and suggestions. Feel free to email me at akseli.ilmanen@gmail.com

If you are interested in my other work, click here to look at my blog, website, or (ongoing) Bachelor dissertation on time perception semantic networks.

Jul 14, 2022 · 01:36:57
#5 Felix Hill: Grounded Language, Transformers, and DeepMind

Felix is a research scientist at DeepMind. He is interested in grounded language understanding and natural language processing (NLP). After finding out about Felix's background, we bring up compositionality and explore why natural language is non-compositional (also the name of Felix's blog, NonCompositional). Then, Felix tells us a bit about his work in Cambridge on abstract vs concrete concepts and gives us a quick crash course on the role of recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and transformers in language models. Next, we talk about Jeff Elman's landmark paper 'Finding Structure in Time' and how neural networks can learn to understand analogies. After that, we discuss the core of Felix's work: training language agents in 3D simulations. Here we raise some questions about language learning as an embodied agent in space and time, and about implementing Allan Paivio's dual coding theory in the memory of a language model. Next, we stick with the theme of memory retrieval and discuss Felix and Andrew Lampinen's work on 'mental time travel' in language models. Finally, I ask Felix about good strategies for getting into DeepMind and the best way to learn NLP.


Timestamps:

(00:00) - Intro

(07:57) - Compositionality in natural language

(16:42) - Abstract vs concrete concepts

(24:03) - RNNs, LSTMs, Transformers

(34:12) - Prediction, time and Jeff Elman

(48:04) - Neural networks & analogies

(56:32) - Grounded language, 3D simulations, babies

(01:05:20) - Keeping vision and language data separate

(01:13:51) - NeuroAI and mental time travel

(01:21:47) - Getting into DeepMind and learning NLP


Felix's website (good overview of his papers)


Papers

Abstract vs concrete concepts paper

Jeff Elman (1990): Finding structure in time paper

Analogies paper

Dual coding theory paper

Mental Time Travel paper


My Twitter

My LinkedIn

Jul 06, 2022 · 01:34:49
#4 Beren Millidge: Reinforcement Learning through Active Inference

Beren is a postdoc at Oxford with a background in machine learning and computational neuroscience. He is interested in Active Inference (related to the Free Energy Principle) and in how the cortex might perform long-term credit assignment the way deep artificial neural networks do. We start off with some shorter questions on the Free Energy Principle and its background concepts. Next, we get onto the exploration vs exploitation dilemma in reinforcement learning and Beren's strategy for maximizing expected reward from restaurant visits - it's a long episode :). We also discuss multimodal representations, shallow minima, autism, and enactivism. Then, we explore predictive coding, going all the way from the phenomenon of visual fading to 20-eyed reinforcement learning agents and the 'Anti-Grandmother Cell'. Finally, we discuss some open questions about backpropagation and the role of time in the brain, and finish the episode with some career advice about writing and publishing, plus Beren's future projects!


Timestamps:

(00:00) - Intro

(02:11) - The Free Energy Principle, Active Inference, and Reinforcement Learning

(13:40) - Exploration vs Exploitation

(26:47) - Multimodal representation, shallow minima, autism

(36:11) - Biased generative models, enactivism, and representation in the brain?

(45:21) - Fixational eye movements, predictive coding, and 20-eyed RL

(52:57) - Precision, attention, and dopamine

(01:01:51) - Sparsity, negative prediction errors, and the 'Anti-Grandmother Cell'

(01:11:23) - Backpropagation in the brain?

(01:19:25) - Time in machine learning and the brain?

(01:25:32) - Career Questions


Beren's Twitter

Beren's Google Scholar

My Twitter


Papers

Deep active inference as variational policy gradients paper

Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs paper

Predictive Coding: a Theoretical and Experimental Review paper

Jun 29, 2022 · 01:35:46
#3 Mark Sprevak: 4E, The Chinese Room Argument, Predictive Coding

Mark is a philosopher of computation and cognitive science at Edinburgh. We start off the conversation exploring why we shouldn't attribute computation to stones and talk about instances of distributed cognition in classical antiquity. Then, we discuss the relationship between functionalism and extended cognition with the paradigmatic example of Otto's notebook, and some implications for deep learning researchers. Next up is the famous Chinese Room Argument and how the 'Robot Reply' illustrates the need for embodiment when going from 'cat' syntax to cat semantics. After a quick rendezvous with the frame problem (see also Ep. 1), Hubert Dreyfus, and Heideggerian AI, we move on to predictive coding, David Marr's three levels of analysis, and the idea of representation in the brain. We finish off the conversation with some very good reading strategies and why we should all move to Edinburgh.


Timestamps:

(00:00) - Intro

(03:04) - Does a stone do computation?

(08:16) - Distributed cognition in classical antiquity

(20:27) - Functionalism and extended cognition

(33:00) - Chinese Room Argument & Robot Reply

(45:51) - Frame Problem, Hubert Dreyfus

(56:47) - David Marr's Three Levels, Predictive Coding and Representation in the Brain

(01:16:14) - Career advice & Why Edinburgh is the best


Mark's Website (All of Mark's publications are freely available there)

My Twitter


Papers

Clark and Chalmers 1998 paper - The Extended Mind

Ballard et al. 1997 paper - Off-loading information onto the environment

Rao and Ballard 1999 paper - Predictive Coding

Spratling 2008 paper - Predictive Coding

May 24, 2022 · 01:24:56
#2 Barbara Webb: Insect Robotics

Barbara is a professor of Biorobotics at Edinburgh. We start with a quick philosophical exploration of robots using chairs, James Gibson's concept of 'affordances', and whether insects have meaning. Next, we talk about how robots can be used to test hypotheses in biology. For most of the episode, we discuss the incredible things crickets, ants and the mushroom body can do. We explore some interesting questions, such as: How does embodiment in crickets replace the need for neural processing? How do ants integrate different sensory modalities? Do insects have consciousness? And can we find associative general-purpose regions in the insect brain? We also discuss how her robotics work can inform predictive coding and reinforcement learning. As usual, we finish off with a career question and her future projects.

Timestamps:

(00:00) - Intro

(02:43) - Philosophy questions

(08:11) - Robot models to test biological hypotheses

(17:45) - Cricket bodies, cricket robots, and cricket music

(26:58) - Ants, spatial navigation, visual memory

(30:14) - Insect learning, sparse coding, insect consciousness

(39:25) - Lessons for embodied AI, predictive coding

(43:57) - Reinforcement learning paper

(50:58) - Career advice

(56:57) - Upcoming project GRASP, postdoc positions


Barbara's website


Talks and papers by Barbara:

Talk on ants and other insects

Talk on crickets

7 dimensions for robot models paper

Reinforcement Learning paper

Multimodal sensory integration in insects paper


Postdoc at Edinburgh

Insect AI

InsectNeuroNano


My Twitter

Mar 11, 2022 · 01:01:16
#1 Ron Chrisley: Embodied AI, Non-Conceptual Content, AI Creativity

In the first episode of the Embodied AI Podcast, Ron tells us about his journey from Stanford to machine learning and the people who inspired him to dig deeper into philosophical questions around embodiment. Ron lays out four dimensions for thinking about Embodied AI and situates its role in the history of AI - mainly the move away from Symbolic AI. We take a close look at his 2003 paper on Embodied Artificial Intelligence, making connections to the relevance problem, Lewis Carroll's 'What the Tortoise Said to Achilles', Ludwig Wittgenstein, and current machine learning. Ron also discusses his work on non-conceptual content and synthetic phenomenology, showing us how we can use embodied technologies to study non-systematic aspects of our experience, or that of a robot. We finish off with his recent ideas about AI creativity and the future of Embodied AI, plus some career advice for younger listeners.


Timestamps: 

(00:00) - Intro

(01:48) - Ron's background

(06:08) - What is Embodied AI?

(25:21) - Symbolic AI

(35:10) - 2003 Paper: Relevance/Frame Problem

(49:36) - Embodied AI and current machine learning

(52:49) - Non-Conceptual Content

(01:01:30) - Synthetic Phenomenology

(01:05:48) - AI creativity

(01:19:42) - Career Advice

(01:24:23) - Future of Embodied AI


Ron's Sussex webpage

Ron's Blog page - see for synthetic phenomenology artwork


Papers

2003 Embodied Artificial Intelligence paper

1996 DPhil Thesis on Non-Conceptual Content

Douglas Hofstadter: Waking Up from the Boolean Dream, or, Subcognition as Computation paper

Paper by Tom Froese and Tom Ziemke


My Twitter

Feb 18, 2022 · 01:29:12
Welcome to the Embodied AI Podcast!

A short episode where I discuss what the podcast is about.


My Twitter

Email - lai24@bath.ac.uk


Ludwig Wittgenstein:

Stanford Encyclopedia of Philosophy overview

Tractatus Logico-Philosophicus PDF

Philosophical Investigations PDF


Hubert Dreyfus:

What Computers Still Can't Do: A Critique of Artificial Reason book


Music Credit

https://uppbeat.io/t/torus/progression

License code: NT6SRVLHJCBXH1YE

Dec 14, 2021 · 01:15