Machine Learning Street Talk (MLST)


By Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Keith Duggar, Ph.D (MIT) (https://www.linkedin.com/in/dr-keith-duggar/).
Prof. Chris Bishop's NEW Deep Learning Textbook!

Professor Chris Bishop is a Technical Fellow and Director at Microsoft Research AI4Science, in Cambridge. He is also Honorary Professor of Computer Science at the University of Edinburgh, and a Fellow of Darwin College, Cambridge. In 2004, he was elected Fellow of the Royal Academy of Engineering, in 2007 he was elected Fellow of the Royal Society of Edinburgh, and in 2017 he was elected Fellow of the Royal Society. Chris was a founding member of the UK AI Council, and in 2019 he was appointed to the Prime Minister’s Council for Science and Technology.


At Microsoft Research, Chris oversees a global portfolio of industrial research and development, with a strong focus on machine learning and the natural sciences.

Chris obtained a BA in Physics from Oxford, and a PhD in Theoretical Physics from the University of Edinburgh, with a thesis on quantum field theory.


Chris's contributions to the field of machine learning have been truly remarkable. He has authored what is arguably the definitive textbook of the field, 'Pattern Recognition and Machine Learning' (PRML), which has served as an essential reference for countless students and researchers around the world. It was his second textbook, following his highly acclaimed 'Neural Networks for Pattern Recognition'.


Recently, Chris has co-authored a new book with his son, Hugh, titled 'Deep Learning: Foundations and Concepts.' This book aims to provide a comprehensive understanding of the key ideas and techniques underpinning the rapidly evolving field of deep learning. It covers both the foundational concepts and the latest advances, making it an invaluable resource for newcomers and experienced practitioners alike.


Buy Chris' textbook here:

https://amzn.to/3vvLcCh


More about Prof. Chris Bishop:

https://en.wikipedia.org/wiki/Christopher_Bishop

https://www.microsoft.com/en-us/research/people/cmbishop/


Support MLST:

Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access, exclusive content and lots more.

https://patreon.com/mlst

Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA

If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail


TOC:

00:00:00 - Intro to Chris

00:06:54 - Changing Landscape of AI

00:08:16 - Symbolism

00:09:32 - PRML

00:11:02 - Bayesian Approach

00:14:49 - Are NNs One Model or Many, Special vs General

00:20:04 - Can Language Models Be Creative

00:22:35 - Sparks of AGI

00:25:52 - Creativity Gap in LLMs

00:35:40 - New Deep Learning Book

00:39:01 - Favourite Chapters

00:44:11 - Probability Theory

00:45:42 - AI4Science

00:48:31 - Inductive Priors

00:58:52 - Drug Discovery

01:05:19 - Foundational Bias Models

01:07:46 - How Fundamental Is Our Physics Knowledge?

01:12:05 - Transformers

01:12:59 - Why Does Deep Learning Work?

01:16:59 - Inscrutability of NNs

01:18:01 - Example of Simulator

01:21:09 - Control

Apr 10, 2024 · 01:22:59
Philip Ball - How Life Works

Dr. Philip Ball is a freelance science writer. He has just written a book called "How Life Works", discussing how the science of biology has advanced in the last 20 years. We focus on the concept of agency in particular.


He trained as a chemist at the University of Oxford, and as a physicist at the University of Bristol. He worked previously at Nature for over 20 years, first as an editor for physical sciences and then as a consultant editor. His writings on science for the popular press have covered topical issues ranging from cosmology to the future of molecular biology.


YT: https://www.youtube.com/watch?v=n6nxUiqiz9I


The transcript link is in the YT description


Philip is the author of many popular books on science, including H2O: A Biography of Water, Bright Earth: The Invention of Colour, The Music Instinct and Curiosity: How Science Became Interested in Everything. His book Critical Mass won the 2005 Aventis Prize for Science Books, while Serving the Reich was shortlisted for the Royal Society Winton Science Book Prize in 2014.


This is one of Tim's personal favourite MLST shows, so we have designated it a special edition. Enjoy!


Buy Philip's book "How Life Works" here: https://amzn.to/3vSmNqp


Support MLST:

Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access, exclusive content and lots more.

https://patreon.com/mlst

Donate: https://www.paypal.com/donate/?hosted...

If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail

Apr 07, 2024 · 02:09:17
Dr. Paul Lessard - Categorical/Structured Deep Learning

Dr. Paul Lessard and his collaborators have written a paper on "Categorical Deep Learning and Algebraic Theory of Architectures". They aim to make neural networks more interpretable, composable and amenable to formal reasoning. The key is mathematical abstraction, as exemplified by category theory - using monads to develop a more principled, algebraic approach to structuring neural networks.


We also discussed the limitations of current neural network architectures in terms of their ability to generalise and reason in a human-like way, in particular their inability to perform unbounded computation in the way a Turing machine can. Paul expressed optimism that this is not a fundamental limitation, but an artefact of current architectures and training procedures.


A recurring theme was the power of abstraction: it allows us to focus on the essential structure while ignoring extraneous details, which can make certain problems more tractable to reason about. Paul sees category theory as providing a powerful "Lego set" for productively thinking about many practical problems.


Towards the end, Paul gave an accessible introduction to some core concepts in category theory like categories, morphisms, functors, monads etc. We explained how these abstract constructs can capture essential patterns that arise across different domains of mathematics.
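For readers new to these terms, here is a rough illustration of the monad pattern in code. This is a generic "Maybe" example of our own, written in Python, not the construction from the paper: a unit that wraps values and a bind that sequences computations while factoring out the error-handling plumbing once.

from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def unit(x: A) -> Optional[A]:
    # Wrap a plain value into the monadic type (here: Optional).
    return x

def bind(m: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
    # Sequence a computation that may fail: propagate None instead of crashing.
    return None if m is None else f(m)

def safe_div(x: float, y: float) -> Optional[float]:
    return None if y == 0 else x / y

def safe_sqrt(x: float) -> Optional[float]:
    return None if x < 0 else x ** 0.5

# Composition via bind: the failure handling lives in one place, which is the
# sense in which monads give a principled, algebraic way to structure pipelines.
print(bind(safe_div(1.0, 4.0), safe_sqrt))   # 0.5
print(bind(safe_div(1.0, 0.0), safe_sqrt))   # None

The paper's programme applies this kind of algebraic structure at the level of neural network architectures rather than error handling.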


Paul is optimistic about the potential of category theory and related mathematical abstractions to put AI and neural networks on a more robust conceptual foundation to enable interpretability and reasoning. However, significant theoretical and engineering challenges remain in realising this vision.


Please support us on Patreon. We are entirely funded from Patreon donations right now.

https://patreon.com/mlst

If you would like to sponsor us, so we can tell your story - reach out on mlstreettalk at gmail


Links:

Categorical Deep Learning: An Algebraic Theory of Architectures

Bruno Gavranović, Paul Lessard, Andrew Dudzik,

Tamara von Glehn, João G. M. Araújo, Petar Veličković

Paper: https://categoricaldeeplearning.com/


Symbolica:

https://twitter.com/symbolica

https://www.symbolica.ai/


Dr. Paul Lessard (Principal Scientist - Symbolica)

https://www.linkedin.com/in/paul-roy-lessard/


Interviewer: Dr. Tim Scarfe


TOC:

00:00:00 - Intro

00:05:07 - What is the category paper all about

00:07:19 - Composition

00:10:42 - Abstract Algebra

00:23:01 - DSLs for machine learning

00:24:10 - Inscrutability

00:29:04 - Limitations with current NNs

00:30:41 - Generative code / NNs don't recurse

00:34:34 - NNs are not Turing machines (special edition)

00:53:09 - Abstraction

00:55:11 - Category theory objects

00:58:06 - Cat theory vs number theory

00:59:43 - Data and Code are one and the same

01:08:05 - Syntax and semantics

01:14:32 - Category DL elevator pitch

01:17:05 - Abstraction again

01:20:25 - Lego set for the universe

01:23:04 - Reasoning

01:28:05 - Category theory 101

01:37:42 - Monads

01:45:59 - Where to learn more cat theory

Apr 01, 2024 · 01:49:10
Can we build a generalist agent? Dr. Minqi Jiang and Dr. Marc Rigter

Dr. Minqi Jiang and Dr. Marc Rigter explain an innovative new method to make the intelligence of agents more general-purpose by training them to learn many worlds before their usual goal-directed training (reinforcement learning). Their new paper is called "Reward-free curricula for training robust world models".

https://arxiv.org/pdf/2306.09205.pdf

https://twitter.com/MinqiJiang

https://twitter.com/MarcRigter

Interviewer: Dr. Tim Scarfe

Please support us on Patreon. Tim is now doing MLST full-time and taking a massive financial hit. If you love MLST and want this to continue, please show your support! In return you get access to shows very early, plus the private Discord and networking.

https://patreon.com/mlst

We are also looking for show sponsors, please get in touch if interested: mlstreettalk at gmail.

MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778

Mar 20, 2024 · 01:57:11
Prof. Nick Chater - The Language Game (Part 1)

Nick Chater is Professor of Behavioural Science at Warwick Business School. He works on rationality and language using a range of theoretical and experimental approaches. We discuss his books The Mind is Flat and The Language Game.


Please support me on Patreon (this is now my main job!) - https://patreon.com/mlst - Access the private Discord, networking, and early access to content.

MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778

https://twitter.com/MLStreetTalk


Buy The Language Game:

https://amzn.to/3SRHjPm


Buy The Mind is Flat:

https://amzn.to/3P3BUUC


YT version: https://youtu.be/5cBS6COzLN4


https://www.wbs.ac.uk/about/person/nick-chater/

https://twitter.com/nickjchater?lang=en

Mar 01, 2024 · 01:43:47
Kenneth Stanley created a new social network based on serendipity and divergence

See what Sam Altman advised Kenneth when he left OpenAI! Professor Kenneth Stanley has just launched a brand new type of social network, which he calls a "Serendipity network". The idea is that you follow interests, NOT people. It's a social network without the popularity contest. We discuss the philosophy and technology behind the venture in great detail. The main ideas came from Kenneth's famous book "Why Greatness Cannot Be Planned".



YT version: https://www.youtube.com/watch?v=pWIrXN-yy8g


Chapters should be baked into the MP3 file now

MLST public Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778

Please support our work on Patreon - get access to interviews months early, private Patreon, networking, exclusive content and regular calls with Tim and Keith. https://patreon.com/mlst

Get Maven here: https://www.heymaven.com/

Kenneth:
https://twitter.com/kenneth0stanley
https://www.kenstanley.net/home

Host - Tim Scarfe:
https://www.linkedin.com/in/ecsquizor/
https://www.mlst.ai/

Original MLST show with Kenneth: https://www.youtube.com/watch?v=lhYGXYeMq_E

Tim explains the book more here:

https://www.youtube.com/watch?v=wNhaz81OOqw



Feb 28, 2024 · 03:15:27
Dr. Brandon Rohrer - Robotics, Creativity and Intelligence

Brandon Rohrer, who obtained his Ph.D from MIT, is driven by understanding algorithms ALL the way down to their nuts and bolts, so he can make them accessible to everyone by first explaining them in the way HE himself would have wanted to learn!


Please support us on Patreon for loads of exclusive content and private Discord:

https://patreon.com/mlst (public discord)

https://discord.gg/aNPkGUQtc5

https://twitter.com/MLStreetTalk


Brandon Rohrer is a seasoned data science leader and educator with a rich background in creating robust, efficient machine learning algorithms and tools. With a Ph.D. in Mechanical Engineering from MIT, his expertise encompasses a broad spectrum of AI applications — from computer vision and natural language processing to reinforcement learning and robotics. Brandon's career has seen him in Principal-level roles at Microsoft and Facebook. An educator at heart, he also shares his knowledge through detailed tutorials, courses, and his forthcoming book, "How to Train Your Robot."


YT version: https://www.youtube.com/watch?v=4Ps7ahonRCY


Brandon's links:

https://github.com/brohrer

https://www.youtube.com/channel/UCsBKTrp45lTfHa_p49I2AEQ

https://www.linkedin.com/in/brohrer/


How transformers work:

https://e2eml.school/transformers


Brandon's End-to-End Machine Learning school courses, posts, and tutorials

https://e2eml.school


Free course:

https://end-to-end-machine-learning.teachable.com/p/complete-course-library-full-end-to-end-machine-learning-catalog


Blog: https://e2eml.school/blog.html


Ziptie: Learning Useful Features [Brandon Rohrer]

https://www.brandonrohrer.com/ziptie


TOC should be baked into the MP3 file now

00:00:00 - Intro to Brandon

00:00:36 - RLHF

00:01:09 - Limitations of transformers

00:07:23 - Agency - we are all GPTs

00:09:07 - BPE / representation bias

00:12:00 - LLM true believers

00:16:42 - Brandon's style of teaching

00:19:50 - ML vs real world = Robotics

00:29:59 - Reward shaping

00:37:08 - No true Scotsman - when do we accept capabilities as real

00:38:50 - Externalism

00:43:03 - Building flexible robots

00:45:37 - Is reward enough

00:54:30 - Optimization curse

00:58:15 - Collective intelligence

01:01:51 - Intelligence + creativity

01:13:35 - ChatGPT + Creativity

01:25:19 - Transformers Tutorial

Feb 13, 2024 · 01:31:42
Showdown Between e/acc Leader And Doomer - Connor Leahy + Beff Jezos

The world's second-most famous AI doomer, Connor Leahy, sits down with Beff Jezos, the founder of the e/acc movement, to debate technology, AI policy, and human values. As the two discuss technology, AI safety, civilization advancement, and the future of institutions, they clash over their opposing perspectives on how we should steer humanity towards a more optimal path.


Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon. We have some amazing content going up there with Max Bennett and Kenneth Stanley this week!

https://patreon.com/mlst (public discord)

https://discord.gg/aNPkGUQtc5

https://twitter.com/MLStreetTalk


Post-interview with Beff and Connor: https://www.patreon.com/posts/97905213

Pre-interview with Connor and his colleague Dan Clothiaux: https://www.patreon.com/posts/connor-leahy-and-97631416


Leahy, known for his critical perspectives on AI and technology, challenges Jezos on a variety of assertions related to the accelerationist movement, market dynamics, and the need for regulation in the face of rapid technological advancements. Jezos, on the other hand, provides insights into the e/acc movement's core philosophies, emphasizing growth, adaptability, and the dangers of over-legislation and centralized control in current institutions.


Throughout the discussion, both speakers explore the concept of entropy, the role of competition in fostering innovation, and the balance needed to mediate order and chaos to ensure the prosperity and survival of civilization. They weigh up the risks and rewards of AI, the importance of maintaining a power equilibrium in society, and the significance of cultural and institutional dynamism.


Beff Jezos (Guillaume Verdon): https://twitter.com/BasedBeffJezos https://twitter.com/GillVerd Connor Leahy: https://twitter.com/npcollapse


YT: https://www.youtube.com/watch?v=0zxi0xSBOaQ


TOC:

00:00:00 - Intro

00:03:05 - Society library reference

00:03:35 - Debate starts

00:05:08 - Should any tech be banned?

00:20:39 - Leaded Gasoline

00:28:57 - False vacuum collapse method?

00:34:56 - What if there are dangerous aliens?

00:36:56 - Risk tolerances

00:39:26 - Optimizing for growth vs value

00:52:38 - Is vs ought

01:02:29 - AI discussion

01:07:38 - War / global competition

01:11:02 - Open source F16 designs

01:20:37 - Offense vs defense

01:28:49 - Morality / value

01:43:34 - What would Connor do

01:50:36 - Institutions/regulation

02:26:41 - Competition vs. Regulation Dilemma

02:32:50 - Existential Risks and Future Planning

02:41:46 - Conclusion and Reflection


Note from Tim: I baked the chapter metadata into the mp3 file this time. Does that help the chapters show up in your app? Let me know. Also, I accidentally exported a few minutes of dead audio at the end of the file; sorry about that, just skip on when the episode finishes.

Feb 03, 2024 · 03:00:19
Mahault Albarracin - Cognitive Science

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:

https://patreon.com/mlst (public discord)

https://discord.gg/aNPkGUQtc5

https://twitter.com/MLStreetTalk


YT version: https://youtu.be/n8G50ynU0Vg


In this episode, Dr. Tim Scarfe interviews Mahault Albarracin, who is the director of product for R&D at VERSES and also a PhD student in cognitive computing at the University of Quebec in Montreal. They discuss a range of topics related to consciousness, cognition, and machine learning.


Throughout the conversation, they touch upon various philosophical and computational concepts such as panpsychism, computationalism, and materiality. They consider the "hard problem" of consciousness, which is the question of how and why we have subjective experiences.


Albarracin shares her views on the controversial Integrated Information Theory and the open letter of opposition it received from the scientific community. She reflects on the nature of scientific critique and rivalry, advising caution in declaring entire fields of study as pseudoscientific.


A substantial part of the discussion is dedicated to the topic of science itself, where Albarracin talks about thresholds between legitimate science and pseudoscience, the role of evidence, and the importance of validating scientific methods and claims.


They touch upon language models, discussing whether they can be considered as having a "theory of mind" and the implications of assigning such properties to AI systems. Albarracin challenges the idea that there is a pure form of intelligence independent of material constraints and emphasizes the role of sociality in the development of our cognitive abilities.


Albarracin offers her thoughts on scientific endeavors, the predictability of systems, the nature of intelligence, and the processes of learning and adaptation. She gives insights into the concept of using degeneracy as a way to increase resilience within systems and the role of maintaining a degree of redundancy or extra capacity as a buffer against unforeseen events.


The conversation concludes with her discussing the potential benefits of collective intelligence, likening the adaptability and resilience of interconnected agent systems to those found in natural ecosystems.


https://www.linkedin.com/in/mahault-albarracin-1742bb153/


00:00:00 - Intro / IIT scandal

00:05:54 - Gaydar paper / What makes good science

00:10:51 - Language

00:18:16 - Intelligence

00:29:06 - X-risk

00:40:49 - Self modelling

00:43:56 - Anthropomorphisation

00:46:41 - Mediation and subjectivity

00:51:03 - Understanding

00:56:33 - Resiliency


Technical topics:

1. Integrated Information Theory (IIT) - Giulio Tononi

2. The "hard problem" of consciousness - David Chalmers

3. Panpsychism and Computationalism in philosophy of mind

4. Active Inference Framework - Karl Friston

5. Theory of Mind and its computation in AI systems

6. Noam Chomsky's views on language models and linguistics

7. Daniel Dennett's Intentional Stance theory

8. Collective intelligence and system resilience

9. Redundancy and degeneracy in complex systems

10. Michael Levin's research on bioelectricity and pattern formation

11. The role of phenomenology in cognitive science

Jan 14, 2024 · 01:07:08
$450M AI Startup In 3 Years | Chai AI

Chai AI is the leading platform for conversational chat artificial intelligence.

Note: this is a sponsored episode of MLST.

William Beauchamp is the founder of two $100M+ companies - Chai Research, an AI startup, and Seamless Capital, a hedge fund based in Cambridge, UK. Chaiverse is the Chai AI developer platform, where developers can train, submit and evaluate on millions of real users to win their share of $1,000,000.

https://www.chai-research.com
https://www.chaiverse.com
https://twitter.com/chai_research
https://facebook.com/chairesearch/
https://www.instagram.com/chairesearch/

Download the app on iOS and Android (https://onelink.to/kqzhy9)

#chai #chai_ai #chai_research #chaiverse #generative_ai #LLMs

Jan 09, 2024 · 29:47
DOES AI HAVE AGENCY? With Professor Karl Friston and Riddhi J. Pitliya

Watch behind the scenes, get early access and join the private Discord by supporting us on Patreon:

https://patreon.com/mlst (public discord)

https://discord.gg/aNPkGUQtc5

https://twitter.com/MLStreetTalk




Agency in the context of cognitive science, particularly when considering the free energy principle, extends beyond just human decision-making and autonomy. It encompasses a broader understanding of how all living systems, including non-human entities, interact with their environment to maintain their existence by minimising sensory surprise.


According to the free energy principle, living organisms strive to minimize the difference between their predicted states and the actual sensory inputs they receive. This principle suggests that agency arises as a natural consequence of this process, particularly when organisms appear to plan ahead many steps in the future.
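For reference, a standard textbook form of the quantity being minimised (not quoted from this episode) is the variational free energy, written in LaTeX notation as:

F(q, o) \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] \;=\; D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] \;-\; \ln p(o)

Because the KL term is non-negative, F is an upper bound on surprise, -\ln p(o). An agent that minimises F therefore simultaneously improves its beliefs q(s) about hidden states s and keeps its sensory observations o unsurprising under its generative model.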


Riddhi J. Pitliya is doing her Ph.D in the computational psychopathology lab at the University of Oxford and works with Professor Karl Friston at VERSES.

https://twitter.com/RiddhiJP


References:


THE FREE ENERGY PRINCIPLE—A PRECIS [Ramstead]

https://www.dialecticalsystems.eu/contributions/the-free-energy-principle-a-precis/


Active Inference: The Free Energy Principle in Mind, Brain, and Behavior [Thomas Parr, Giovanni Pezzulo, Karl J. Friston]

https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind


The beauty of collective intelligence, explained by a developmental biologist | Michael Levin

https://www.youtube.com/watch?v=U93x9AWeuOA


Growing Neural Cellular Automata

https://distill.pub/2020/growing-ca


Carcinisation

https://en.wikipedia.org/wiki/Carcinisation


Prof. KENNETH STANLEY - Why Greatness Cannot Be Planned

https://www.youtube.com/watch?v=lhYGXYeMq_E


On Defining Artificial Intelligence [Pei Wang]

https://sciendo.com/article/10.2478/jagi-2019-0002


Why? The Purpose of the Universe [Goff]

https://amzn.to/4aEqpfm


Umwelt

https://en.wikipedia.org/wiki/Umwelt


An Immense World: How Animal Senses Reveal the Hidden Realms [Yong]

https://amzn.to/3tzzTb7


What Is It Like to Be a Bat? [Nagel]

https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf


COUNTERFEIT PEOPLE. DANIEL DENNETT. (SPECIAL EDITION)

https://www.youtube.com/watch?v=axJtywd9Tbo


We live in the infosphere [FLORIDI]

https://www.youtube.com/watch?v=YLNGvvgq3eg


Mark Zuckerberg: First Interview in the Metaverse | Lex Fridman Podcast #398

https://www.youtube.com/watch?v=MVYrJJNdrEg


Black Mirror: Rachel, Jack and Ashley Too | Official Trailer | Netflix

https://www.youtube.com/watch?v=-qIlCo9yqpY

Jan 07, 2024 · 01:02:39
Understanding Deep Learning - Prof. SIMON PRINCE [STAFF FAVOURITE]

Watch behind the scenes, get early access and join private Discord by supporting us on Patreon: https://patreon.com/mlst

https://discord.gg/aNPkGUQtc5

https://twitter.com/MLStreetTalk


In this comprehensive exploration of the field of deep learning with Professor Simon Prince, who has just authored an entire textbook on deep learning, we investigate the technical underpinnings that contribute to the field's unexpected success and confront the enduring conundrums that still perplex AI researchers.


Key points discussed include the surprising efficiency of deep learning models, where high-dimensional loss functions are optimized in ways which defy traditional statistical expectations. Professor Prince provides an exposition on the choice of activation functions, architecture design considerations, and overparameterization. We scrutinize the generalization capabilities of neural networks, addressing the seeming paradox of well-performing overparameterized models. Professor Prince challenges popular misconceptions, shedding light on the manifold hypothesis and the role of data geometry in informing the training process. He also speaks about how layers within neural networks collaborate, recursively reconfiguring instance representations in ways that contribute to both the stability of learning and the emergence of hierarchical feature representations. In addition to the primary discussion of technical elements and learning dynamics, the conversation turns briefly to the ethical implications of AI advancements.
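As a small numerical illustration of the overparameterization point (our own toy setup, not an example from the book): a linear model with far more parameters than training points can interpolate the data exactly, and among the infinitely many interpolants the pseudoinverse (like gradient descent from a zero initialisation, in this linear setting) selects the minimum-norm one.

import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 200                     # 20 training points, 200 parameters
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Minimum-norm least-squares solution: one interpolant among infinitely many.
w_hat = np.linalg.pinv(X) @ y

train_err = np.max(np.abs(X @ w_hat - y))
print(f"max training residual: {train_err:.2e}")   # ~0: perfect interpolation
print(f"||w_hat|| = {np.linalg.norm(w_hat):.2f}  vs  ||w_true|| = {np.linalg.norm(w_true):.2f}")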


Follow Prof. Prince:

https://twitter.com/SimonPrinceAI

https://www.linkedin.com/in/simon-prince-615bb9165/


Get the book now!

https://mitpress.mit.edu/9780262048644/understanding-deep-learning/

https://udlbook.github.io/udlbook/


Panel: Dr. Tim Scarfe -

https://www.linkedin.com/in/ecsquizor/

https://twitter.com/ecsquendor


TOC:

[00:00:00] Introduction

[00:11:03] General Book Discussion

[00:15:30] The Neural Metaphor

[00:17:56] Back to Book Discussion

[00:18:33] Emergence and the Mind

[00:29:10] Computation in Transformers

[00:31:12] Studio Interview with Prof. Simon Prince

[00:31:46] Why Deep Neural Networks Work: Spline Theory

[00:40:29] Overparameterization in Deep Learning

[00:43:42] Inductive Priors and the Manifold Hypothesis

[00:49:31] Universal Function Approximation and Deep Networks

[00:59:25] Training vs Inference: Model Bias

[01:03:43] Model Generalization Challenges

[01:11:47] Purple Segment: Unknown Topic

[01:12:45] Visualizations in Deep Learning

[01:18:03] Deep Learning Theories Overview

[01:24:29] Tricks in Neural Networks

[01:30:37] Critiques of ChatGPT

[01:42:45] Ethical Considerations in AI


References are in the YT version description: https://youtu.be/sJXn4Cl4oww

Dec 26, 2023 · 02:06:38
Prof. BERT DE VRIES - ON ACTIVE INFERENCE

Watch behind the scenes with Bert on Patreon: https://www.patreon.com/posts/bert-de-vries-93230722 https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk

Note, there is some mild background music on chapter 1 (Least Action), 3 (Friston) and 5 (Variational Methods) - please skip ahead if annoying. It's a tiny fraction of the overall podcast.

YT version: https://youtu.be/2wnJ6E6rQsU

Bert de Vries is Professor in the Signal Processing Systems group at Eindhoven University of Technology (TU/e). His research focuses on the development of intelligent autonomous agents that learn from in-situ interactions with their environment. His research draws inspiration from diverse fields including computational neuroscience, Bayesian machine learning, Active Inference and signal processing. Bert believes that the development of signal processing systems will in the future be largely automated by autonomously operating agents that learn purposefully from situated environmental interactions.

Bert received his M.Sc. (1986) and Ph.D. (1991) degrees in Electrical Engineering from Eindhoven University of Technology (TU/e) and the University of Florida, respectively. From 1992 to 1999, he worked as a research scientist at Sarnoff Research Center in Princeton (NJ, USA). Since 1999, he has been employed in the hearing aids industry, both in engineering and managerial positions. De Vries was appointed part-time professor in the Signal Processing Systems Group at TU/e in 2012.

Contact:
https://twitter.com/bertdv0
https://www.tue.nl/en/research/researchers/bert-de-vries
https://www.verses.ai/about-us

Panel: Dr. Tim Scarfe / Dr. Keith Duggar

TOC:
[00:00:00] Principle of Least Action
[00:05:10] Patreon Teaser
[00:05:46] On Friston
[00:07:34] Capm Peterson (VERSES)
[00:08:20] Variational Methods
[00:16:13] Dan Mapes (VERSES)
[00:17:12] Engineering with Active Inference
[00:20:23] Jason Fox (VERSES)
[00:20:51] Riddhi Jain Pitliya
[00:21:49] Hearing Aids as Adaptive Agents
[00:33:38] Steven Swanson (VERSES)
[00:35:46] Main Interview Kick Off, Engineering and Active Inference
[00:43:35] Actor / Streaming / Message Passing
[00:56:21] Do Agents Lose Flexibility with Maturity?
[01:00:50] Language Compression
[01:04:37] Marginalisation to Abstraction
[01:12:45] Online Structural Learning
[01:18:40] Efficiency in Active Inference
[01:26:25] SEs become Neuroscientists
[01:35:11] Building an Automated Engineer
[01:38:58] Robustness and Design vs Grow
[01:42:38] RXInfer
[01:51:12] Resistance to Active Inference?
[01:57:39] Diffusion of Responsibility in a System
[02:10:33] Chauvinism in "Understanding"
[02:20:08] On Becoming a Bayesian

Refs:
RXInfer
https://biaslab.github.io/rxinfer-website/
Prof. Ariel Caticha
https://www.albany.edu/physics/faculty/ariel-caticha
Pattern Recognition and Machine Learning (Bishop)
https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf
Data Analysis: A Bayesian Tutorial (Sivia)
https://www.amazon.co.uk/Data-Analysis-Bayesian-Devinderjit-Sivia/dp/0198568320
Probability Theory: The Logic of Science (E. T. Jaynes)
https://www.amazon.co.uk/Probability-Theory-Principles-Elementary-Applications/dp/0521592712/

#activeinference #artificialintelligence

Nov 20, 2023 · 02:27:39
MULTI AGENT LEARNING - LANCELOT DA COSTA

Please support us https://www.patreon.com/mlst

https://discord.gg/aNPkGUQtc5

https://twitter.com/MLStreetTalk


Lance Da Costa aims to advance our understanding of intelligent systems by modelling cognitive systems and improving artificial systems.

He's a PhD candidate with Greg Pavliotis and Karl Friston jointly at Imperial College London and UCL, and a student in the Mathematics of Random Systems CDT run by Imperial College London and the University of Oxford. He completed an MRes in Brain Sciences at UCL with Karl Friston and Biswa Sengupta, an MASt in Pure Mathematics at the University of Cambridge with Oscar Randal-Williams, and a BSc in Mathematics at EPFL and the University of Toronto.


Summary:

Lance did pure math originally but became interested in the brain and AI. He started working with Karl Friston on the free energy principle, which claims all intelligent agents minimize free energy for perception, action, and decision-making. Lance has worked to provide mathematical foundations and proofs for why the free energy principle is true, starting from basic assumptions about agents interacting with their environment. This aims to justify the principle from first physics principles.

Dr. Scarfe and Da Costa discuss different approaches to AI - the free energy/active inference approach focused on mimicking human intelligence vs approaches focused on maximizing capability like deep reinforcement learning. Lance argues active inference provides advantages for explainability and safety compared to black box AI systems. It provides a simple, sparse description of intelligence based on a generative model and free energy minimization.

They discuss the need for structured learning and acquiring core knowledge to achieve more human-like intelligence. Lance highlights work from Josh Tenenbaum's lab that shows similar learning trajectories to humans in a simple Atari-like environment.

Incorporating core knowledge constrains the space of possible generative models the agent can use to represent the world, making learning more sample efficient. Lance argues active inference agents with core knowledge can match human learning capabilities.

They discuss how to make generative models interpretable, such as through factor graphs. The goal is to be able to understand the representations and message passing in the model that leads to decisions.

In summary, Lance argues active inference provides a principled approach to AI with advantages for explainability, safety, and human-like learning. Combining it with core knowledge and structural learning aims to achieve more human-like artificial intelligence.


https://www.lancelotdacosta.com/

https://twitter.com/lancelotdacosta


Interviewer: Dr. Tim Scarfe


TOC

00:00:00 - Start

00:09:27 - Intelligence

00:12:37 - Priors / structure learning

00:17:21 - Core knowledge

00:29:05 - Intelligence is specialised

00:33:21 - The magic of agents

00:39:30 - Intelligibility of structure learning


#artificialintelligence #activeinference

Nov 05, 2023 · 49:57
THE HARD PROBLEM OF OBSERVERS - WOLFRAM & FRISTON [SPECIAL EDITION]

Please support us! https://www.patreon.com/mlst https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk


YT version (with intro not found here): https://youtu.be/6iaT-0Dvhnc

This is the epic special edition show you have been waiting for! With two of the most brilliant scientists alive today. Atoms, things, agents, ... observers. What even defines an "observer" and what properties must all observers share? How do objects persist in our universe given that their material composition changes over time? What does it mean for a thing to be a thing? And do things supervene on our lower-level physical reality? What does it mean for a thing to have agency? What's the difference between a complex dynamical system with and without agency? Could a rock or an AI catflap have agency? Can the universe be factorised into distinct agents, or is agency diffused? Have you ever pondered these deep questions about reality?

Prof. Friston and Dr. Wolfram have spent their entire careers, some 40+ years each, thinking long and hard about these very questions, and have developed significant frameworks of reference on their respective journeys (the Wolfram Physics project and the Free Energy principle).

Panel: MIT Ph.D Keith Duggar

Production: Dr. Tim Scarfe

Refs:
TED Talk with Stephen: https://www.ted.com/talks/stephen_wolfram_how_to_think_computationally_about_ai_the_universe_and_everything
https://writings.stephenwolfram.com/2023/10/how-to-think-computationally-about-ai-the-universe-and-everything/

TOC:
00:00:00 - Show kickoff

00:02:38 - Wolfram gets to grips with FEP

00:27:08 - How much control does an agent/observer have

00:34:52 - Observer persistence, what universe seems like to us

00:40:31 - Black holes

00:45:07 - Inside vs outside

00:52:20 - Moving away from the predictable path

00:55:26 - What can observers do

01:06:50 - Self modelling gives agency

01:11:26 - How do you know a thing has agency?

01:22:48 - Deep link between dynamics, ruliad and AI

01:25:52 - Does agency entail free will? Defining Agency

01:32:57 - Where do I probe for agency?

01:39:13 - Why is the universe the way we see it?

01:42:50 - Alien intelligence

01:43:40 - The hard problem of Observers

01:46:20 - Summary thoughts from Wolfram

01:49:35 - Factorisability of FEP

01:57:05 - Patreon interview teaser

Oct 29, 2023 · 01:59:29
DR. JEFF BECK - THE BAYESIAN BRAIN

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

YT version: https://www.youtube.com/watch?v=c4praCiy9qU


Dr. Jeff Beck is a computational neuroscientist studying probabilistic reasoning (decision making under uncertainty) in humans and animals with emphasis on neural representations of uncertainty and cortical implementations of probabilistic inference and learning. His line of research incorporates information theoretic and hierarchical statistical analysis of neural and behavioural data as well as reinforcement learning and active inference.


https://www.linkedin.com/in/jeff-beck...

https://scholar.google.com/citations?...


Interviewer: Dr. Tim Scarfe


TOC

00:00:00 Intro

00:00:51 Bayesian / Knowledge

00:14:57 Active inference

00:18:58 Mediation

00:23:44 Philosophy of mind / science

00:29:25 Optimisation

00:42:54 Emergence

00:56:38 Steering emergent systems

01:04:31 Work plan

01:06:06 Representations/Core knowledge


#activeinference

Oct 16, 2023 · 01:10:06
Prof. Melanie Mitchell 2.0 - AI Benchmarks are Broken!

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

Prof. Melanie Mitchell argues that the concept of "understanding" in AI is ill-defined and multidimensional - we can't simply say an AI system does or doesn't understand. She advocates for rigorously testing AI systems' capabilities using proper experimental methods from cognitive science. Popular benchmarks for intelligence often rely on the assumption that if a human can perform a task, an AI that performs the task must have human-like general intelligence. But benchmarks should evolve as capabilities improve.

Large language models show surprising skill on many human tasks but lack common sense and fail at simple things young children can do. Their knowledge comes from statistical relationships in text, not grounded concepts about the world. We don't know if their internal representations actually align with human-like concepts. More granular testing focused on generalization is needed.

There are open questions around whether large models' abilities constitute a fundamentally different, non-human form of intelligence based on vast statistical correlations across text. Mitchell argues intelligence is situated, domain-specific and grounded in physical experience and evolution. The brain computes, but in a specialized way honed by evolution for controlling the body. Extracting "pure" intelligence may not work.

Other key points:
- Need more focus on proper experimental method in AI research. Developmental psychology offers examples for rigorous testing of cognition.
- Reporting instance-level failures rather than just aggregate accuracy can provide insights.
- Scaling laws and complex systems science are an interesting area of complexity theory, with applications to understanding cities.
- Concepts like "understanding" and "intelligence" in AI force refinement of fuzzy definitions.
- Human intelligence may be more collective and social than we realize. AI forces us to rethink concepts we apply anthropomorphically.

The overall emphasis is on rigorously building the science of machine cognition through proper experimentation and benchmarking as we assess emerging capabilities.

TOC:
[00:00:00] Introduction and Munk AI Risk Debate Highlights
[00:05:00] Douglas Hofstadter on AI Risk
[00:06:56] The Complexity of Defining Intelligence
[00:11:20] Examining Understanding in AI Models
[00:16:48] Melanie's Insights on AI Understanding Debate
[00:22:23] Unveiling the Concept Arc
[00:27:57] AI Goals: A Human vs Machine Perspective
[00:31:10] Addressing the Extrapolation Challenge in AI
[00:36:05] Brain Computation: The Human-AI Parallel
[00:38:20] The Arc Challenge: Implications and Insights
[00:43:20] The Need for Detailed AI Performance Reporting
[00:44:31] Exploring Scaling in Complexity Theory

Errata: Note Tim said around 39 mins that a recent Stanford/DM paper modelling ARC “on GPT-4 got around 60%”. This is not correct and he misremembered. It was actually davinci3, and around 10%, which is still extremely good for a blank-slate approach with an LLM and no ARC-specific knowledge. Folks on our forum couldn’t reproduce the result. See paper linked below.

Books (MUST READ):
Artificial Intelligence: A Guide for Thinking Humans (Melanie Mitchell)
https://www.amazon.co.uk/Artificial-Intelligence-Guide-Thinking-Humans/dp/B07YBHNM1C/?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=44ccac78973f47e59d745e94967c0f30&camp=1634&creative=6738
Complexity: A Guided Tour (Melanie Mitchell)
https://www.amazon.co.uk/Audible-Complexity-A-Guided-Tour?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=3f8bd505d86865c50c02dd7f10b27c05&camp=1634&creative=6738


Show notes (transcript, full references etc)

https://atlantic-papyrus-d68.notion.site/Melanie-Mitchell-2-0-15e212560e8e445d8b0131712bad3000?pvs=25

YT version: https://youtu.be/29gkDpR2orc

Sep 10, 2023 · 01:01:48
Autopoietic Enactivism and the Free Energy Principle - Prof. Friston, Prof. Buckley, Dr. Ramstead

We explore connections between FEP and enactivism, including tensions raised in a paper critiquing FEP from an enactivist perspective.


Dr. Maxwell Ramstead provides background on enactivism emerging from autopoiesis, with a focus on embodied cognition and rejecting information processing/computational views of mind.


Chris shares his journey from robotics into FEP, starting as a skeptic but becoming convinced it's the right framework. He notes there are both "high road" and "low road" versions, ranging from embodied to more radically anti-representational stances. He doesn't see a definitive fork between dynamical systems and information theory as the source of conflict. Rather, the notion of operational closure in enactivism seems to be the main sticking point.


The group explores definitional issues around structure/organization, boundaries, and operational closure. Maxwell argues the generative model in FEP captures organizational dependencies akin to operational closure. The Markov blanket formalism models structural interfaces.


We discuss the concept of goals in cognitive systems - Chris advocates an intentional stance perspective - using notions of goals/intentions if they help explain system dynamics. Goals emerge from beliefs about dynamical trajectories. Prof Friston provides an elegant explanation of how goal-directed behavior naturally falls out of the FEP mathematics in a particular "goldilocks" regime of system scale/dynamics. The conversation explores the idea that many systems simply act "as if" they have goals or models, without necessarily possessing explicit representations. This helps resolve tensions between enactivist and computational perspectives.


Throughout the dialogue, Maxwell presses philosophical points about the FEP abolishing what he perceives as false dichotomies in cognitive science such as internalism/externalism. He is critical of enactivists' commitment to bright line divides between subject areas.


Prof. Karl Friston - Inventor of the free energy principle https://scholar.google.com/citations?user=q_4u0aoAAAAJ

Prof. Chris Buckley - Professor of Neural Computation at Sussex University https://scholar.google.co.uk/citations?user=nWuZ0XcAAAAJ&hl=en

Dr. Maxwell Ramstead - Director of Research at VERSES https://scholar.google.ca/citations?user=ILpGOMkAAAAJ&hl=fr


We address critique in this paper:

Laying down a forking path: Tensions between enaction and the free energy principle (Ezequiel A. Di Paolo, Evan Thompson, Randall D. Beer)

https://philosophymindscience.org/index.php/phimisci/article/download/9187/8975


Other refs:

Multiscale integration: beyond internalism and externalism (Maxwell J D Ramstead)

https://pubmed.ncbi.nlm.nih.gov/33627890/


MLST panel: Dr. Tim Scarfe and Dr. Keith Duggar


TOC (auto generated):

0:00 - Introduction
0:41 - Defining enactivism and its variants
6:58 - The source of the conflict between dynamical systems and information theory
8:56 - Operational closure in enactivism
10:03 - Goals and intentions
12:35 - The link between dynamical systems and information theory
15:02 - Path integrals and non-equilibrium dynamics
18:38 - Operational closure defined
21:52 - Structure vs. organization in enactivism
24:24 - Markov blankets as interfaces
28:48 - Operational closure in FEP
30:28 - Structure and organization again
31:08 - Dynamics vs. information theory
33:55 - Goals and intentions emerge in the FEP mathematics
36:58 - The Good Regulator Theorem
49:30 - Enactivism and its relation to ecological psychology
52:00 - Goals, intentions and beliefs
55:21 - Boundaries and meaning
58:55 - Enactivism's rejection of information theory
1:02:08 - Beliefs vs goals
1:05:06 - Ecological psychology and FEP
1:08:41 - The Good Regulator Theorem
1:18:38 - How goal-directed behavior emerges
1:23:13 - Ontological vs metaphysical boundaries
1:25:20 - Boundaries as maps
1:31:08 - Connections to the maximum entropy principle
1:33:45 - Relations to quantum and relational physics

Sep 05, 2023 · 01:34:46
STEPHEN WOLFRAM 2.0 - Resolving the Mystery of the Second Law of Thermodynamics

Please check out Numerai - our sponsor @ http://numer.ai/mlst

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

The Second Law: Resolving the Mystery of the Second Law of Thermodynamics
Buy Stephen's book here - https://tinyurl.com/2jj2t9wa

The Language Game: How Improvisation Created Language and Changed the World by Morten H. Christiansen and Nick Chater
Buy here: https://tinyurl.com/35bvs8be

Stephen Wolfram starts by discussing the second law of thermodynamics - the idea that entropy, or disorder, tends to increase over time. He talks about how this law seems intuitively true, but has been difficult to prove. Wolfram outlines his decades-long quest to fully understand the second law, including failed early attempts to simulate particles mixing as a 12-year-old. He explains how irreversibility arises from the computational irreducibility of underlying physical processes, coupled with our limited ability as observers to do the computations needed to "decrypt" the microscopic details.

The conversation then shifts to language and how concepts allow us to communicate shared ideas between minds positioned in different parts of "rule space". Wolfram talks about the successes and limitations of using large language models to generate Wolfram Language code from natural language prompts. He sees it as a useful tool for getting started with programming, but one whose output still needs human refinement.

The final part of the conversation focuses on AI safety and governance. Wolfram notes uncontrolled actuation is where things can go wrong with AI systems. He discusses whether AI agents could have intrinsic experiences and goals, how we might build trust networks between AIs, and why managing a system of many AIs may be easier than managing a single AI. Wolfram emphasizes the need for more philosophical depth in thinking about AI aims, and draws connections between potential solutions and his work on computational irreducibility and physics.

Show notes: https://docs.google.com/document/d/1hXNHtvv8KDR7PxCfMh9xOiDFhU3SVDW8ijyxeTq9LHo/edit?usp=sharing

Pod version: TBA

https://twitter.com/stephen_wolfram

TOC:
00:00:00 - Introduction
00:02:34 - Second law book
00:14:01 - Reversibility / entropy / observers / equivalence
00:34:22 - Concepts/language in the ruliad
00:49:04 - Comparison to free energy principle
00:53:58 - ChatGPT / Wolfram / Language
01:00:17 - AI risk

Panel: Dr. Tim Scarfe @ecsquendor / Dr. Keith Duggar @DoctorDuggar
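As a toy illustration of the mixing experiment described above (our own sketch in Python, not Wolfram's code): particles that start on one side of a box and diffuse at random drive the coarse-grained entropy of their positions up towards its maximum, which is the statistical picture behind the second law.

import numpy as np

rng = np.random.default_rng(0)
n_particles, n_bins, n_steps = 10_000, 10, 2000

# All particles start in the leftmost tenth of a unit interval.
x = rng.uniform(0.0, 0.1, size=n_particles)

def coarse_entropy(positions: np.ndarray) -> float:
    # Shannon entropy (in nats) of the binned position distribution.
    counts, _ = np.histogram(positions, bins=n_bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for step in range(n_steps + 1):
    if step % 500 == 0:
        print(f"step {step:5d}  entropy = {coarse_entropy(x):.3f} "
              f"(max = {np.log(n_bins):.3f})")
    x = np.clip(x + rng.normal(0.0, 0.01, size=n_particles), 0.0, 1.0)  # diffuse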

Aug 15, 2023 · 01:24:07
Prof. Jürgen Schmidhuber - FATHER OF AI ON ITS DANGERS

Please check out Numerai - our sponsor @ http://numer.ai/mlst

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

Professor Jürgen Schmidhuber, the father of artificial intelligence, joins us today. Schmidhuber discussed the history of machine learning, the current state of AI, and his career researching recursive self-improvement, artificial general intelligence and its risks.

Schmidhuber pointed out the importance of studying the history of machine learning to properly assign credit for key breakthroughs. He discussed some of the earliest machine learning algorithms. He also highlighted the foundational work of Leibniz, who discovered the chain rule that enables training of deep neural networks, and the ancient Antikythera mechanism, the first known gear-based computer.

Schmidhuber discussed limits to recursive self-improvement and artificial general intelligence, including physical constraints like the speed of light and what can be computed. He noted we have no evidence the human brain can do more than traditional computing. Schmidhuber sees humankind as a potential stepping stone to more advanced, spacefaring machine life which may have little interest in humanity. However, he believes commercial incentives point AGI development towards being beneficial, and that open-source innovation can help to achieve "AI for all", symbolised by his company's motto "AI∀".

Schmidhuber discussed approaches he believes will lead to more general AI, including meta-learning, reinforcement learning, building predictive world models, and curiosity-driven learning. His "fast weight programming" approach from the 1990s involved one network altering another network's connections. This was actually the first Transformer variant, now called an unnormalised linear Transformer. He also described the first GANs in 1990, used to implement artificial curiosity.

Schmidhuber reflected on his career researching AI. He said his fondest memories were gaining insights that seemed to solve longstanding problems, though new challenges always arose: "then for a brief moment it looks like the greatest thing since sliced bread and then you get excited ... but then suddenly you realize, oh, it's still not finished. Something important is missing." Since 1985 he has worked on systems that can recursively improve themselves, constrained only by the limits of physics and computability. He believes continual progress, shaped by both competition and collaboration, will lead to increasingly advanced AI.

On AI risk, Schmidhuber said: "To me it's indeed weird. Now there are all these letters coming out warning of the dangers of AI. And I think some of the guys who are writing these letters, they are just seeking attention because they know that AI dystopia are attracting more attention than documentaries about the benefits of AI in healthcare."

Schmidhuber believes we should be more concerned with existing threats like nuclear weapons than speculative risks from advanced AI. He said: "As far as I can judge, all of this cannot be stopped but it can be channeled in a very natural way that is good for humankind...there is a tremendous bias towards good AI, meaning AI that is good for humans...I am much more worried about 60 year old technology that can wipe out civilization within two hours, without any AI."
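To make the fast-weight idea concrete, here is a minimal numpy sketch (our paraphrase of the published idea, not Schmidhuber's original code): one network writes associations into another network's weight matrix as outer products of keys and values, and a query reads them back out, the additive update now described as unnormalised linear attention.

import numpy as np

rng = np.random.default_rng(0)
d = 4                                   # key/value/query dimension
W_fast = np.zeros((d, d))               # fast weights, programmed at run time

def write(key: np.ndarray, value: np.ndarray) -> None:
    # The "slow" network stores an association via a rank-1 weight update.
    global W_fast
    W_fast += np.outer(value, key)

def read(query: np.ndarray) -> np.ndarray:
    # Retrieve: sum over stored values weighted by key-query similarity.
    return W_fast @ query

# Store two key-value associations, then query with the first key.
k1, v1 = rng.normal(size=d), rng.normal(size=d)
k2, v2 = rng.normal(size=d), rng.normal(size=d)
write(k1, v1)
write(k2, v2)

# read(k1) = v1*(k1.k1) + v2*(k2.k1): dominated by v1 when keys are near-orthogonal.
print(read(k1))
print(v1 * (k1 @ k1) + v2 * (k2 @ k1))  # identical by construction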

[this is truncated, read show notes]

YT: https://youtu.be/q27XMPm5wg8

Show notes: https://docs.google.com/document/d/13-vIetOvhceZq5XZnELRbaazpQbxLbf5Yi7M25CixEE/edit?usp=sharing

Note: Interview was recorded 15th June 2023.

https://twitter.com/SchmidhuberAI

Panel: Dr. Tim Scarfe @ecsquendor / Dr. Keith Duggar @DoctorDuggar

Pod version: TBA

TOC:
[00:00:00] Intro / Numerai
[00:00:51] Show Kick Off
[00:02:24] Credit Assignment in ML
[00:12:51] XRisk
[00:20:45] First Transformer variant of 1991
[00:47:20] Which Current Approaches are Good
[00:52:42] Autonomy / Curiosity
[00:58:42] GANs of 1990
[01:11:29] OpenAI, Moats, Legislation

Aug 14, 2023 · 01:21:04
Can We Develop Truly Beneficial AI? George Hotz and Connor Leahy

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB


George Hotz and Connor Leahy discuss the crucial challenge of developing beneficial AI that is aligned with human values. Hotz believes truly aligned AI is impossible, while Leahy argues it's a solvable technical challenge.
Hotz contends that AI will inevitably pursue power, but distributing AI widely would prevent any single AI from dominating. He advocates open-sourcing AI developments to democratize access. Leahy counters that alignment is necessary to ensure AIs respect human values. Without solving alignment, general AI could ignore or harm humans.
They discuss whether AI's tendency to seek power stems from optimization pressure or human-instilled goals. Leahy argues goal-seeking behavior naturally emerges while Hotz believes it reflects human values. Though agreeing on AI's potential dangers, they differ on solutions. Hotz favors accelerating AI progress and distributing capabilities while Leahy wants safeguards put in place.
While acknowledging risks like AI-enabled weapons, they debate whether broad access or restrictions better manage threats. Leahy suggests limiting dangerous knowledge, but Hotz insists openness checks government overreach. They concur that coordination and balance of power are key to navigating the AI revolution. Both eagerly anticipate seeing whose ideas prevail as AI progresses.

Transcript and notes: https://docs.google.com/document/d/1smkmBY7YqcrhejdbqJOoZHq-59LZVwu-DNdM57IgFcU/edit?usp=sharing

Note: this is not a normal episode i.e. the hosts are not part of the debate (and for the record don't agree with Connor or George).

TOC:
[00:00:00] Introduction to George Hotz and Connor Leahy
[00:03:10] George Hotz's Opening Statement: Intelligence and Power
[00:08:50] Connor Leahy's Opening Statement: Technical Problem of Alignment and Coordination
[00:15:18] George Hotz's Response: Nature of Cooperation and Individual Sovereignty
[00:17:32] Discussion on individual sovereignty and defense
[00:18:45] Debate on living conditions in America versus Somalia
[00:21:57] Talk on the nature of freedom and the aesthetics of life
[00:24:02] Discussion on the implications of coordination and conflict in politics
[00:33:41] Views on the speed of AI development / hard takeoff
[00:35:17] Discussion on potential dangers of AI
[00:36:44] Discussion on the effectiveness of current AI
[00:40:59] Exploration of potential risks in technology
[00:45:01] Discussion on memetic mutation risk
[00:52:36] AI alignment and exploitability
[00:53:13] Superintelligent AIs and the assumption of good intentions
[00:54:52] Humanity’s inconsistency and AI alignment
[00:57:57] Stability of the world and the impact of superintelligent AIs
[01:02:30] Personal utopia and the limitations of AI alignment
[01:05:10] Proposed regulation on limiting the total number of flops
[01:06:20] Having access to a powerful AI system
[01:18:00] Power dynamics and coordination issues with AI
[01:25:44] Humans vs AI in Optimization
[01:27:05] The Impact of AI's Power Seeking Behavior
[01:29:32] A Debate on the Future of AI

Aug 04, 2023 · 01:29:59
Dr. MAXWELL RAMSTEAD - The Physics of Survival

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

Join us for a fascinating discussion of the free energy principle with Dr. Maxwell Ramstead, a leading thinker exploring the intersection of math, physics, and philosophy, and Director of Research at VERSES. Proposed by renowned neuroscientist Karl Friston, the FEP offers a unifying theory explaining how systems maintain order and their identity.

The free energy principle inverts traditional survival logic. Rather than asking what behaviors promote survival, it queries: given that things exist, what must they do? The answer: minimize free energy, or "surprise". Systems persist by constantly ensuring their internal states match anticipated states based on a model of the world. Failure to minimize surprise leads to chaos as systems dissolve into disorder. Thus, the free energy principle elucidates why lifeforms relentlessly model and predict their surroundings. It is an existential imperative counterbalancing entropy.

Essentially, this principle describes the mind's pursuit of harmony between expectations and reality. Its relevance spans from cells to societies, underlying order wherever longevity is found. Our discussion explores the technical details and philosophical implications of this paradigm-shifting theory. How does it further our understanding of cognition and intelligence? What insights does it offer about the fundamental patterns and properties of existence? Can it precipitate breakthroughs in disciplines like neuroscience and artificial intelligence?

Dr. Ramstead completed his Ph.D. at McGill University in Montreal, Canada in 2019, with frequent research visits to UCL in London, under the supervision of the world’s most cited neuroscientist, Professor Karl Friston (UCL).


YT version: https://youtu.be/8qb28P7ksyE
https://scholar.google.ca/citations?user=ILpGOMkAAAAJ&hl=fr
https://spatialwebfoundation.org/team/maxwell-ramstead/
https://www.linkedin.com/in/maxwell-ramstead-43a1991b7/
https://twitter.com/mjdramstead
VERSES AI: https://www.verses.ai/

Intro: Tim Scarfe (Ph.D)
Interviewer: Keith Duggar (Ph.D MIT)

TOC:
0:00:00 - Tim Intro
0:08:10 - Intro and philosophy
0:14:26 - Intro to Maxwell
0:18:00 - FEP
0:29:08 - Markov Blankets
0:51:15 - Verses AI / Applications of FEP
1:05:55 - Potential issues with deploying FEP
1:10:50 - Shared knowledge graphs
1:14:29 - XRisk / Ethics
1:24:57 - Strength of Verses
1:28:30 - Misconceptions about FEP, Physics vs philosophy/criticism
1:44:41 - Emergence / consciousness

References:
Principia Mathematica https://www.abebooks.co.uk/servlet/BookDetailsPL?bi=30567249049
"Whatever Next? Predictive Brains, Situated Agents, and the Future of Cognitive Science" (Andy Clark, Behavioral and Brain Sciences, 2013) https://pubmed.ncbi.nlm.nih.gov/23663408/
"Math Does Not Represent" by Erik Curiel https://www.youtube.com/watch?v=aA_T20HAzyY
A free energy principle for generic quantum systems (Chris Fields et al) https://arxiv.org/pdf/2112.15242.pdf
Designing explainable artificial intelligence with active inference https://arxiv.org/abs/2306.04025
Am I Self-Conscious? (Friston) https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00579/full
The Meta-Problem of Consciousness https://philarchive.org/archive/CHATMO-32v1
The Map-Territory Fallacy Fallacy https://arxiv.org/abs/2208.06924
A Technical Critique of Some Parts of the Free Energy Principle - Martin Biehl et al https://arxiv.org/abs/2001.06408
Weak Markov Blankets in High-Dimensional, Sparsely-Coupled Random Dynamical Systems - Dalton A. R. Sakthivadivel https://arxiv.org/pdf/2207.07620.pdf

Jul 16, 2023 · 02:05:51
MUNK DEBATE ON AI (COMMENTARY) [DAVID FOSTER]

MUNK DEBATE ON AI (COMMENTARY) [DAVID FOSTER]

Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/ESrGqhf5CB


The discussion between Tim Scarfe and David Foster provided an in-depth critique of the arguments made by panelists at the Munk AI Debate on whether artificial intelligence poses an existential threat to humanity. While the panelists made thought-provoking points, Scarfe and Foster found their arguments largely speculative, lacking crucial details and evidence to support claims of an impending existential threat.


Scarfe and Foster strongly disagreed with Max Tegmark’s position that AI has an unparalleled “blast radius” that could lead to human extinction. Tegmark failed to provide a credible mechanism for how this scenario would unfold in reality. His arguments relied more on speculation about advanced future technologies than on present capabilities and trends. As Foster argued, we cannot conclude AI poses a threat based on speculation alone. Evidence is needed to ground discussions of existential risks in science rather than science fiction fantasies or doomsday scenarios.


They found Yann LeCun’s statements too broad and high-level, critiquing him for not providing sufficiently strong arguments or specifics to back his position. While LeCun aptly noted AI remains narrow in scope and far from achieving human-level intelligence, his arguments lacked crucial details on current limitations and why we should not fear superintelligence emerging in the near future. As Scarfe argued, without these details the discussion descended into “philosophy” rather than focusing on evidence and data.


Scarfe and Foster also took issue with Yoshua Bengio’s unsubstantiated speculation that machines would necessarily develop a desire for self-preservation that threatens humanity. There is no evidence today’s AI systems are developing human-like general intelligence or desires, let alone that these attributes would manifest in ways dangerous to humans. The question is not whether machines will eventually surpass human intelligence, but how and when this might realistically unfold based on present technological capabilities. Bengio’s arguments relied more on speculation about advanced future technologies than on evidence from current systems and research.


In contrast, they strongly agreed with Melanie Mitchell’s view that scenarios of malevolent or misguided superintelligence are speculation, not backed by evidence from AI as it exists today. Claims of an impending “existential threat” from AI are overblown, harmful to progress, and inspire undue fear of technology rather than consideration of its benefits. Mitchell sensibly argued discussions of risks from emerging technologies must be grounded in science and data, not speculation, if we are to make balanced policy and development decisions.


Overall, while the debate raised thought-provoking questions about advanced technologies that could eventually transform our world, none of the speakers made a credible evidence-based case that today’s AI poses an existential threat. Scarfe and Foster argued the debate failed to discuss concrete details about current capabilities and limitations of technologies like language models, which remain narrow in scope. General human-level AI is still missing many components, including physical embodiment, emotions, and the "common sense" reasoning that underlies human thinking. Claims of existential threats require extraordinary evidence to justify policy or research restrictions, not speculation. By discussing possibilities rather than probabilities grounded in evidence, the debate failed to substantively advance our thinking on risks from AI and its plausible development in the coming decades.


David's new podcast: https://podcasts.apple.com/us/podcast/the-ai-canvas/id1692538973

Generative AI book: https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/

Jul 02, 2023 · 02:08:15
[SPONSORED] The Digitized Self: AI, Identity and the Human Psyche (YouAi)

[SPONSORED] The Digitized Self: AI, Identity and the Human Psyche (YouAi)

Sponsored Episode - YouAi

What if an AI truly knew you—your thoughts, values, aptitudes, and dreams? An AI that could enhance your life in profound ways by amplifying your strengths, augmenting your weaknesses, and connecting you with like-minded souls. That is the vision of YouAi.

YouAi founder Dmitri Shapiro believes digitizing our inner lives could unlock tremendous benefits. But mapping the human psyche also poses deep questions. As technology mediates our self-understanding, what are the risks of rendering our minds in bits and algorithms? Could we gain a new means of flourishing, or lose something intangible? There are no easy answers, but YouAi offers a vision balanced by hard thinking.

Shapiro discussed YouAi's app, which builds personalized AI assistants by learning how individuals think through interactive questions. As people share, YouAi develops a multidimensional model of their mind. Users get a tailored feed of prompts to continue engaging and teaching their AI.

YouAi's vision provides a glimpse into a future that could unsettle or fulfill our hopes. As technology mediates understanding ourselves and others, will we risk losing what makes us human or find new means of flourishing? YouAi believes that together, we can build a future where our minds contain infinite potential—and their technology helps unlock it. But we must proceed thoughtfully, upholding human dignity above all else. Our minds shape who we are. And who we can become.

Digitise your mind today: YouAi - https://YouAi.ai
MindStudio – https://YouAi.ai/mindstudio
YouAi Mind Indexer - https://YouAi.ai/train

Join the MLST discord and register for the YouAi event on July 13th: https://discord.gg/ESrGqhf5CB

TOC:
0:00:00 - Introduction to Mind Digitization
0:09:31 - The YouAi Platform and Personal Applications
0:27:54 - The Potential of Group Alignment
0:30:28 - Applications in Human-to-Human Communication
0:35:43 - Applications in Interfacing with Digital Technology
0:43:41 - Introduction to the Project
0:44:51 - Brain digitization and mind vs. brain
0:49:55 - The Extended Mind and Neurofeedback
0:54:16 - Personalized Learning and the Future of Education
1:02:19 - Privacy and Data Security
1:14:20 - Ethical Considerations of Digitizing the Mind
1:19:49 - The Metaverse and the Future of Digital Identity
1:25:17 - Digital Immortality and Legacy
1:29:09 - The Nature of Consciousness
1:34:11 - Digitization of the Mind
1:35:06 - Potential Inequality in a Digital World
1:38:00 - The Role of Technology in Equalizing or Democratizing Society
1:40:51 - The Future of the Startup and Community Involvement

Jun 29, 2023 · 01:46:00
Joscha Bach and Connor Leahy on AI risk

Joscha Bach and Connor Leahy on AI risk

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk The first 10 minutes of audio from Joscha aren't great; it improves after that.

Transcript and longer summary: https://docs.google.com/document/d/1TUJhlSVbrHf2vWoe6p7xL5tlTK_BGZ140QqqTudF8UI/edit?usp=sharing Dr. Joscha Bach argued that general intelligence emerges from civilization, not individuals. Given our biological constraints, humans cannot achieve a high level of general intelligence on our own. Bach believes AGI may become integrated into all parts of the world, including human minds and bodies. He thinks a future where humans and AGI harmoniously coexist is possible if we develop a shared purpose and incentive to align. However, Bach is uncertain about how AI progress will unfold or which scenarios are most likely. Bach argued that global control and regulation of AI is unrealistic. While regulation may address some concerns, it cannot stop continued progress in AI. He believes individuals determine their own values, so "human values" cannot be formally specified and aligned across humanity. For Bach, the possibility of building beneficial AGI is exciting but much work is still needed to ensure a positive outcome. Connor Leahy believes we have more control over the future than the default outcome might suggest. With sufficient time and effort, humanity could develop the technology and coordination to build a beneficial AGI. However, the default outcome likely leads to an undesirable scenario if we do not actively work to build a better future. Leahy thinks finding values and priorities most humans endorse could help align AI, even if individuals disagree on some values. Leahy argued a future where humans and AGI harmoniously coexist is ideal but will require substantial work to achieve. While regulation faces challenges, it remains worth exploring. Leahy believes limits to progress in AI exist but we are unlikely to reach them before humanity is at risk. He worries even modestly superhuman intelligence could disrupt the status quo if misaligned with human values and priorities. Overall, Bach and Leahy expressed optimism about the possibility of building beneficial AGI but believe we must address risks and challenges proactively. They agreed substantial uncertainty remains around how AI will progress and what scenarios are most plausible. But developing a shared purpose between humans and AI, improving coordination and control, and finding human values to help guide progress could all improve the odds of a beneficial outcome. With openness to new ideas and willingness to consider multiple perspectives, continued discussions like this one could help ensure the future of AI is one that benefits and inspires humanity. TOC: 00:00:00 - Introduction and Background 00:02:54 - Different Perspectives on AGI 00:13:59 - The Importance of AGI 00:23:24 - Existential Risks and the Future of Humanity 00:36:21 - Coherence and Coordination in Society 00:40:53 - Possibilities and Future of AGI 00:44:08 - Coherence and alignment 01:08:32 - The role of values in AI alignment 01:18:33 - The future of AGI and merging with AI 01:22:14 - The limits of AI alignment 01:23:06 - The scalability of intelligence 01:26:15 - Closing statements and future prospects

Jun 20, 2023 · 01:31:28
Neel Nanda - Mechanistic Interpretability

Neel Nanda - Mechanistic Interpretability

In this wide-ranging conversation, Tim Scarfe interviews Neel Nanda, a researcher at DeepMind working on mechanistic interpretability, which aims to understand the algorithms and representations learned by machine learning models. Neel discusses how models can represent their thoughts using motifs, circuits, and linear directional features which are often communicated via a "residual stream", an information highway models use to pass information between layers.

Neel argues that "superposition", the ability for models to represent more features than they have neurons, is one of the biggest open problems in interpretability. This is because superposition thwarts our ability to understand models by decomposing them into individual units of analysis. Despite this, Neel remains optimistic that ambitious interpretability is possible, citing examples like his work reverse engineering how models do modular addition. However, Neel notes we must start small, build rigorous foundations, and not assume our theoretical frameworks perfectly match reality.

The conversation turns to whether models can have goals or agency, with Neel arguing they likely can, based on heuristics like models executing long-term plans towards some objective. However, we currently lack techniques to build models with specific goals, meaning any goals would likely be learned or emergent. Neel highlights how induction heads, circuits that models use to track long-range dependencies, seem crucial for phenomena like in-context learning to emerge.
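The copy-forward behavior attributed to induction heads can be caricatured in a few lines of plain Python (a hypothetical sketch of the function computed, not of how an attention head actually implements it): look back for the previous occurrence of the current token and predict whatever followed it last time.

```python
# Caricature of an induction head's behavior: find the most recent earlier
# occurrence of the current token and copy forward the token that followed it.
# This pattern-completion behavior is one reason such circuits are thought to
# support simple in-context learning.

def induction_prediction(tokens):
    current = tokens[-1]
    # Scan backwards over earlier positions for the last time we saw `current`.
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]   # copy forward the token that followed it
    return None                    # no earlier occurrence: nothing to copy

# Pattern "... A B ... A" -> predict "B"
print(induction_prediction(["the", "cat", "sat", "on", "the"]))  # -> "cat"
```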

On the existential risks from AI, Neel believes we should avoid overly confident claims that models will or will not be dangerous, as we do not understand them enough to make confident theoretical assertions. However, models could pose risks through being misused, having undesirable emergent properties, or being imperfectly aligned. Neel argues we must pursue rigorous empirical work to better understand and ensure model safety, avoid "philosophizing" about definitions of intelligence, and focus on ensuring researchers have standards for what it means to decide a system is "safe" before deploying it. Overall, a thoughtful conversation on one of the most important issues of our time.


Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Twitter: https://twitter.com/MLStreetTalk


Neel Nanda: https://www.neelnanda.io/


TOC

[00:00:00] Introduction and Neel Nanda's Interests (walk and talk)

[00:03:15] Mechanistic Interpretability: Reverse Engineering Neural Networks

[00:13:23] Discord questions

[00:21:16] Main interview kick-off in studio

[00:49:26] Grokking and Sudden Generalization

[00:53:18] The Debate on Systematicity and Compositionality

[01:19:16] How do ML models represent their thoughts

[01:25:51] Do Large Language Models Learn World Models?

[01:53:36] Superposition and Interference in Language Models

[02:43:15] Transformers discussion

[02:49:49] Emergence and In-Context Learning

[03:20:02] Superintelligence/XRisk discussion


Transcript: https://docs.google.com/document/d/1FK1OepdJMrqpFK-_1Q3LQN6QLyLBvBwWW_5z8WrS1RI/edit?usp=sharing

Refs: https://docs.google.com/document/d/115dAroX0PzSduKr5F1V4CWggYcqIoSXYBhcxYktCnqY/edit?usp=sharing


Jun 18, 2023 · 04:10:00
Prof. Daniel Dennett - Could AI Counterfeit People Destroy Civilization? (SPECIAL EDITION)

Prof. Daniel Dennett - Could AI Counterfeit People Destroy Civilization? (SPECIAL EDITION)

Please check out Numerai - our sponsor using our link @

http://numer.ai/mlst


Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.


Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Twitter: https://twitter.com/MLStreetTalk

YT version: https://youtu.be/axJtywd9Tbo


In this fascinating interview, Dr. Tim Scarfe speaks with renowned philosopher Daniel Dennett about the potential dangers of AI and the concept of "Counterfeit People." Dennett raises concerns about AI being used to create artificial colleagues, and argues that preventing counterfeit AI individuals is crucial for societal trust and security.


They delve into Dennett's "Two Black Boxes" thought experiment, the Chinese Room Argument by John Searle, and discuss the implications of AI in terms of reversibility, reontologisation, and realism. Dr. Scarfe and Dennett also examine adversarial LLMs, mental trajectories, and the emergence of consciousness and semanticity in AI systems.


Throughout the conversation, they touch upon various philosophical perspectives, including Gilbert Ryle's Ghost in the Machine, Chomsky's work, and the importance of competition in academia. Dennett concludes by highlighting the need for legal and technological barriers to protect against the dangers of counterfeit AI creations.


Join Dr. Tim Scarfe and Daniel Dennett in this thought-provoking discussion about the future of AI and the potential challenges we face in preserving our civilization. Don't miss this insightful conversation!


TOC:

00:00:00 Intro

00:09:56 Main show kick off

00:12:04 Counterfeit People

00:16:03 Reversibility

00:20:55 Reontologisation

00:24:43 Realism

00:27:48 Adversarial LLMs are out to get us

00:32:34 Exploring mental trajectories and Chomsky

00:38:53 Gilbert Ryle and Ghost in machine and competition in academia

00:44:32 2 Black boxes thought experiment / intentional stance

01:00:11 Chinese room

01:04:49 Singularitarianism

01:07:22 Emergence of consciousness and semanticity


References:


Tree of Thoughts: Deliberate Problem Solving with Large Language Models

https://arxiv.org/abs/2305.10601


The Problem With Counterfeit People (Daniel Dennett)

https://www.theatlantic.com/technology/archive/2023/05/problem-counterfeit-people/674075/


The knowledge argument

https://en.wikipedia.org/wiki/Knowledge_argument


The Intentional Stance

https://www.researchgate.net/publication/271180035_The_Intentional_Stance


Two Black Boxes: a Fable (Daniel Dennett)

https://www.researchgate.net/publication/28762339_Two_Black_Boxes_a_Fable


The Chinese Room Argument (John Searle)

https://plato.stanford.edu/entries/chinese-room/

https://web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf


From Bacteria to Bach and Back: The Evolution of Minds (Daniel Dennett)

https://www.amazon.co.uk/Bacteria-Bach-Back-Evolution-Minds/dp/014197804X


Consciousness Explained (Daniel Dennett)

https://www.amazon.co.uk/Consciousness-Explained-Penguin-Science-Dennett/dp/0140128670/


The Mind's I: Fantasies and Reflections on Self and Soul (Hofstadter, Douglas R; Dennett, Daniel C.)

https://www.abebooks.co.uk/servlet/BookDetailsPL?bi=31494476184


#DanielDennett #ArtificialIntelligence #CounterfeitPeople

Jun 04, 2023 · 01:14:42
Decoding the Genome: Unraveling the Complexities with AI and Creativity [Prof. Jim Hughes, Oxford]

Decoding the Genome: Unraveling the Complexities with AI and Creativity [Prof. Jim Hughes, Oxford]

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk

In this eye-opening discussion, Tim Scarfe and Prof. Jim Hughes, a professor of gene regulation at Oxford University, explore the intersection of creativity, genomics, and artificial intelligence. Prof. Hughes brings his expertise in genomics and insights from his interdisciplinary research group, which includes machine learning experts, mathematicians, and molecular biologists.

The conversation begins with an overview of Prof. Hughes' background and the importance of creativity in scientific research. They delve into the challenges of unlocking the secrets of the human genome and how machine learning, specifically convolutional neural networks, can assist in decoding genome function. As they discuss validation and interpretability concerns in machine learning, they acknowledge the need for experimental tests and ponder the complex nature of understanding the basic code of life. They touch upon the fascinating world of morphogenesis and emergence, considering the potential crossovers into AI and their implications for self-repairing systems in medicine.

Examining the ethical and regulatory aspects of genomics and AI, the duo explores the implications of having access to someone's genome, the potential to predict traits or diseases, and the role of AI in understanding complex genetic signals. They also consider the challenges of keeping up with the rapidly expanding body of scientific research and the pressures faced by researchers in academia. To wrap up the discussion, Tim and Prof. Hughes shed light on the significance of creativity and diversity in scientific research, emphasizing the need for divergent processes and diverse perspectives to foster innovation and avoid consensus-driven convergence.

Filmed at https://www.creativemachine.io/
Prof. Jim Hughes: https://www.rdm.ox.ac.uk/people/jim-hughes
Dr. Tim Scarfe: https://xrai.glass/

Table of Contents:
1. [0:00:00] Introduction and Prof. Jim Hughes' background
2. [0:02:48] Creativity and its role in science
3. [0:07:13] Challenges in understanding the human genome
4. [0:13:20] Using convolutional neural networks to decode genome function
5. [0:15:32] Validation and interpretability concerns in machine learning
6. [0:17:56] Challenges in understanding the basic code of life
7. [0:19:36] Morphogenesis, emergence, and potential crossovers into AI
8. [0:21:38] Ethics and regulation in genomics and AI
9. [0:23:30] The role of AI in understanding and managing genetic risks
10. [0:32:37] Creativity and diversity in scientific research

May 31, 2023 · 42:57
ROBERT MILES - "There is a good chance this kills everyone"

ROBERT MILES - "There is a good chance this kills everyone"

Please check out Numerai - our sponsor @

https://numerai.com/mlst


Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.


Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Twitter: https://twitter.com/MLStreetTalk


Welcome to an exciting episode featuring an outstanding guest, Robert Miles! Renowned for his extraordinary contributions to understanding AI and its potential impacts on our lives, Robert is an artificial intelligence advocate, researcher, and YouTube sensation. He combines engaging discussions with entertaining content, captivating millions of viewers from around the world.

With a strong computer science background, Robert has been actively involved in AI safety projects, focusing on raising awareness about potential risks and benefits of advanced AI systems. His YouTube channel is celebrated for making AI safety discussions accessible to a diverse audience through breaking down complex topics into easy-to-understand nuggets of knowledge, and you might also recognise him from his appearances on Computerphile.

In this episode, join us as we dive deep into Robert's journey in the world of AI, exploring his insights on AI alignment, superintelligence, and the role of AI shaping our society and future. We'll discuss topics such as the limits of AI capabilities and physics, AI progress and timelines, human-machine hybrid intelligence, AI in conflict and cooperation with humans, and the convergence of AI communities.


Robert Miles:

@RobertMilesAI

https://twitter.com/robertskmiles

https://aisafety.info/


YT version: https://www.youtube.com/watch?v=kMLKbhY0ji0


Panel:

Dr. Tim Scarfe

Dr. Keith Duggar

Joint CTOs - https://xrai.glass/


Refs:

Are Emergent Abilities of Large Language Models a Mirage? (Rylan Schaeffer)

https://arxiv.org/abs/2304.15004


TOC:

Intro [00:00:00]

Numerai Sponsor Message [00:02:17]

AI Alignment [00:04:27]

Limits of AI Capabilities and Physics [00:18:00]

AI Progress and Timelines [00:23:52]

AI Arms Race and Innovation [00:31:11]

Human-Machine Hybrid Intelligence [00:38:30]

Understanding and Defining Intelligence [00:42:48]

AI in Conflict and Cooperation with Humans [00:50:13]

Interpretability and Mind Reading in AI [01:03:46]

Mechanistic Interpretability and Deconfusion Research [01:05:53]

Understanding the core concepts of AI [01:07:40]

Moon landing analogy and AI alignment [01:09:42]

Cognitive horizon and limits of human intelligence [01:11:42]

Funding and focus on AI alignment [01:16:18]

Regulating AI technology and potential risks [01:19:17]

Aligning AI with human values and its dynamic nature [01:27:04]

Cooperation and Allyship [01:29:33]

Orthogonality Thesis and Goal Preservation [01:33:15]

Anthropomorphic Language and Intelligent Agents [01:35:31]

Maintaining Variety and Open-ended Existence [01:36:27]

Emergent Abilities of Large Language Models [01:39:22]

Convergence vs Emergence [01:44:04]

Criticism of X-risk and Alignment Communities [01:49:40]

Fusion of AI communities and addressing biases [01:52:51]

AI systems integration into society and understanding them [01:53:29]

Changing opinions on AI topics and learning from past videos [01:54:23]

Utility functions and von Neumann-Morgenstern theorems [01:54:47]

AI Safety FAQ project [01:58:06]

Building a conversation agent using AI safety dataset [02:00:36]

May 21, 2023 · 02:01:54
AI Senate Hearing - Executive Summary (Sam Altman, Gary Marcus)

AI Senate Hearing - Executive Summary (Sam Altman, Gary Marcus)

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Twitter: https://twitter.com/MLStreetTalk


In a historic and candid Senate hearing, OpenAI CEO Sam Altman, Professor Gary Marcus, and IBM's Christina Montgomery discussed the regulatory landscape of AI in the US. The discussion was particularly interesting due to its timing, as it followed the recent release of the EU's proposed AI Act, which could potentially ban American companies like OpenAI and Google from providing API access to generative AI models and impose massive fines for non-compliance.


The speakers openly addressed potential risks of AI technology and emphasized the need for precision regulation. This was a unique approach, as historically, US companies have tried their hardest to avoid regulation. The hearing not only showcased the willingness of industry leaders to engage in discussions on regulation but also demonstrated the need for a balanced approach to avoid stifling innovation.


The EU AI Act, scheduled to come into force in 2026, is still just a proposal, but it has already raised concerns about its impact on the American tech ecosystem and potential conflicts between US and EU laws. With extraterritorial jurisdiction and provisions targeting open-source developers and software distributors like GitHub, the Act could create more problems than it solves by encouraging unsafe AI practices and limiting access to advanced AI technologies.


One core issue with the Act is the designation of foundation models in the highest risk category, primarily due to their open-ended nature. A significant risk theme revolves around users creating harmful content and determining who should be held accountable – the users or the platforms. The Senate hearing served as an essential platform to discuss these pressing concerns and work towards a regulatory framework that promotes both safety and innovation in AI.


00:00 Show

01:35 Legals

03:44 Intro

10:33 Altman intro

14:16 Christina Montgomery

18:20 Gary Marcus

23:15 Jobs

26:01 Scorecards

28:08 Harmful content

29:47 Startups

31:35 What meets the definition of harmful?

32:08 Moratorium

36:11 Social Media

46:17 Gary's take on BingGPT and pivot into policy

48:05 Democratisation

May 16, 2023 · 49:44
Future of Generative AI [David Foster]

Future of Generative AI [David Foster]

Generative Deep Learning, 2nd Edition [David Foster]

https://www.oreilly.com/library/view/generative-deep-learning/9781098134174/


Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Twitter: https://twitter.com/MLStreetTalk


In this conversation, Tim Scarfe and David Foster, the author of 'Generative Deep Learning,' dive deep into the world of generative AI, discussing topics ranging from model families and autoregressive models to the democratization of AI technology and its potential impact on various industries. They explore the connection between language and true intelligence, as well as the limitations of GPT and other large language models. The discussion also covers the importance of task-independent world models, the concept of active inference, and the potential of combining these ideas with transformer and GPT-style models.


Ethics and regulation in AI development are also discussed, including the need for transparency in data used to train AI models and the responsibility of developers to ensure their creations are not destructive. The conversation touches on the challenges posed by AI-generated content on copyright laws and the diminishing role of effort and skill in copyright due to generative models.


The impact of AI on education and creativity is another key area of discussion, with Tim and David exploring the potential benefits and drawbacks of using AI in the classroom, the need for a balance between traditional learning methods and AI-assisted learning, and the importance of teaching students to use AI tools critically and responsibly.


Generative AI in music is also explored, with David and Tim discussing the potential for AI-generated music to change the way we create and consume art, as well as the challenges in training AI models to generate music that captures human emotions and experiences.


Throughout the conversation, Tim and David touch on the potential risks and consequences of AI becoming too powerful, the importance of maintaining control over the technology, and the possibility of government intervention and regulation. The discussion concludes with a thought experiment about AI predicting human actions and creating transient capabilities that could lead to doom.


TOC:

Introducing Generative Deep Learning [00:00:00]

Model Families in Generative Modeling [00:02:25]

Autoregressive Models and Recurrence [00:06:26]

Language and True Intelligence [00:15:07]

Language, Reality, and World Models [00:19:10]

AI, Human Experience, and Understanding [00:23:09]

GPTs Limitations and World Modeling [00:27:52]

Task-Independent Modeling and Cybernetic Loop [00:33:55]

Collective Intelligence and Emergence [00:36:01]

Active Inference vs. Reinforcement Learning [00:38:02]

Combining Active Inference with Transformers [00:41:55]

Decentralized AI and Collective Intelligence [00:47:46]

Regulation and Ethics in AI Development [00:53:59]

AI-Generated Content and Copyright Laws [00:57:06]

Effort, Skill, and AI Models in Copyright [00:57:59]

AI Alignment and Scale of AI Models [00:59:51]

Democratization of AI: GPT-3 and GPT-4 [01:03:20]

Context Window Size and Vector Databases [01:10:31]

Attention Mechanisms and Hierarchies [01:15:04]

Benefits and Limitations of Language Models [01:16:04]

AI in Education: Risks and Benefits [01:19:41]

AI Tools and Critical Thinking in the Classroom [01:29:26]

Impact of Language Models on Assessment and Creativity [01:35:09]

Generative AI in Music and Creative Arts [01:47:55]

Challenges and Opportunities in Generative Music [01:52:11]

AI-Generated Music and Human Emotions [01:54:31]

Language Modeling vs. Music Modeling [02:01:58]

Democratization of AI and Industry Impact [02:07:38]

Recursive Self-Improving Superintelligence [02:12:48]

AI Technologies: Positive and Negative Impacts [02:14:44]

Runaway AGI and Control Over AI [02:20:35]

AI Dangers, Cybercrime, and Ethics [02:23:42]

May 11, 2023 · 02:31:36
PERPLEXITY AI - The future of search.

PERPLEXITY AI - The future of search.

https://www.perplexity.ai/

https://www.perplexity.ai/iphone

https://www.perplexity.ai/android

Interview with Aravind Srinivas, CEO and Co-Founder of Perplexity AI – Revolutionizing Learning with Conversational Search Engines

Dr. Tim Scarfe talks with Dr. Aravind Srinivas, CEO and Co-Founder of Perplexity AI, about his journey from studying AI and reinforcement learning at UC Berkeley to launching Perplexity – a startup that aims to revolutionize learning through the power of conversational search engines. By combining the strengths of large language models like GPT-* with search engines, Perplexity provides users with direct answers to their questions in a decluttered user interface, making the learning process not only more efficient but also enjoyable.

Aravind shares his insights on how advertising can be made more relevant and less intrusive with the help of large language models, emphasizing the importance of transparency in relevance ranking to improve user experience. He also discusses the challenge of balancing the interests of users and advertisers for long-term success.

The interview delves into the challenges of maintaining truthfulness and balancing opinions and facts in a world where algorithmic truth is difficult to achieve. Aravind believes that opinionated models can be useful as long as they don't spread misinformation and are transparent about being opinions. He also emphasizes the importance of allowing users to correct or update information, making the platform more adaptable and dynamic.

Lastly, Aravind shares his thoughts on embracing a digital society with large language models, stressing the need for frequent and iterative deployments of these models to reduce fear of AI and misinformation. He envisions a future where using AI tools effectively requires clear thinking and first-principle reasoning, ultimately benefiting society as a whole. Education and transparency are crucial to counter potential misuse of AI for political or malicious purposes.
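For readers curious how combining a language model with a search engine can work in principle, here is a generic "retrieve, then answer with citations" sketch. It is a toy illustration under assumed components, not Perplexity's implementation: the retriever is a trivial keyword matcher standing in for a real search engine, and call_llm is a hypothetical stand-in for whatever language model you have access to.

```python
# Generic retrieve-then-answer loop in the spirit of conversational search.
# Toy sketch only; `call_llm` and the keyword retriever are placeholders.

def retrieve(query, documents, k=2):
    # Trivial keyword-overlap retriever standing in for a real search engine.
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:k]

def call_llm(prompt):
    # Hypothetical: replace with a real model call. Here we just echo a stub
    # so the example runs end to end without external services.
    return f"[model answer based on a prompt of {len(prompt)} chars]"

def answer(query, documents):
    sources = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(sources))
    prompt = (
        "Answer the question using only the sources below, citing them by number.\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt), sources

docs = [
    "Perplexity AI combines a search engine with a large language model.",
    "Reinforcement learning studies agents that learn from reward signals.",
]
reply, cited = answer("How does a conversational search engine work?", docs)
print(reply)
print("cited sources:", cited)
```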

YT version: https://youtu.be/_vMOWw3uYvk Aravind Srinivas: https://www.linkedin.com/in/aravind-srinivas-16051987/

https://scholar.google.com/citations?user=GhrKC1gAAAAJ&hl=en

https://twitter.com/aravsrinivas?lang=en

Interviewer: Dr. Tim Scarfe (CTO XRAI Glass)

Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB

TOC:
Introduction and Background of Perplexity AI [00:00:00]

The Importance of a Decluttered UI and User Experience [00:04:19]

Advertising in Search Engines and Potential Improvements [00:09:02]

Challenges and Opportunities in this new Search Modality [00:18:17]

Benefits of Perplexity and Personalized Learning [00:21:27]

Objective Truth and Personalized Wikipedia [00:26:34]

Opinions and Truth in Answer Engines [00:30:53]

Embracing the Digital Society with Language Models [00:37:30]

Impact on Jobs and Future of Learning [00:40:13]

Educating users on when perplexity works and doesn't work [00:43:13]

Improving user experience and the possibilities of voice-to-voice interaction [00:45:04]

The future of language models and auto-regressive models [00:49:51]

Performance of GPT-4 and potential improvements [00:52:31]

Building the ultimate research and knowledge assistant [00:55:33]

Revolutionizing note-taking and personal knowledge stores [00:58:16]

References:
Evaluating Verifiability in Generative Search Engines (Nelson F. Liu et al, Stanford University) https://arxiv.org/pdf/2304.09848.pdf

Note: this was a sponsored interview.

May 08, 2023 · 59:49
#114 - Secrets of Deep Reinforcement Learning (Minqi Jiang)

#114 - Secrets of Deep Reinforcement Learning (Minqi Jiang)

Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Twitter: https://twitter.com/MLStreetTalk


In this exclusive interview, Dr. Tim Scarfe sits down with Minqi Jiang, a leading PhD student at University College London and Meta AI, as they delve into the fascinating world of deep reinforcement learning (RL) and its impact on technology, startups, and research. Discover how Minqi made the crucial decision to pursue a PhD in this exciting field, and learn from his valuable startup experiences and lessons.

Minqi shares his insights into balancing serendipity and planning in life and research, and explains the role of objectives and Goodhart's Law in decision-making. Get ready to explore the depths of robustness in RL, two-player zero-sum games, and the differences between RL and supervised learning.
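Minimax regret comes up at [00:22:57]; as a generic illustration of the decision rule (a toy sketch with made-up numbers, not Minqi's formulation): rather than maximizing average payoff, pick the policy whose worst-case regret across possible environments is smallest.

```python
import numpy as np

# Generic illustration of minimax-regret decision making.
# Hypothetical payoff table: rows are candidate policies, columns are
# environments we might be deployed in.
payoffs = np.array([
    [10.0, 0.0, 2.0],   # policy A: great in env 0, poor elsewhere
    [ 6.0, 5.0, 4.0],   # policy B: decent everywhere
    [ 1.0, 7.0, 3.0],   # policy C
])

best_per_env = payoffs.max(axis=0)      # what an oracle policy could achieve in each env
regret = best_per_env - payoffs         # how much each policy leaves on the table
worst_case_regret = regret.max(axis=1)  # each policy judged by its worst environment

print("worst-case regret per policy:", worst_case_regret)
print("minimax-regret choice: policy", "ABC"[int(worst_case_regret.argmin())])  # -> policy B
```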

As they discuss the role of environment in intelligence, emergence, and abstraction, prepare to be blown away by the possibilities of open-endedness and the intelligence explosion. Learn how language models generate their own training data, the limitations of RL, and the future of software 2.0 with interpretability concerns.

From robotics and open-ended learning applications to learning potential metrics and MDPs, this interview is a goldmine of information for anyone interested in AI, RL, and the cutting edge of technology. Don't miss out on this incredible opportunity to learn from a rising star in the AI world!

TOC

Tech & Startup Background [00:00:00]

Pursuing PhD in Deep RL [00:03:59]

Startup Lessons [00:11:33]

Serendipity vs Planning [00:12:30]

Objectives & Decision Making [00:19:19]

Minimax Regret & Uncertainty [00:22:57]

Robustness in RL & Zero-Sum Games [00:26:14]

RL vs Supervised Learning [00:34:04]

Exploration & Intelligence [00:41:27]

Environment, Emergence, Abstraction [00:46:31]

Open-endedness & Intelligence Explosion [00:54:28]

Language Models & Training Data [01:04:59]

RLHF & Language Models [01:16:37]

Creativity in Language Models [01:27:25]

Limitations of RL [01:40:58]

Software 2.0 & Interpretability [01:45:11]

Language Models & Code Reliability [01:48:23]

Robust Prioritized Level Replay [01:51:42]

Open-ended Learning [01:55:57]

Auto-curriculum & Deep RL [02:08:48]

Robotics & Open-ended Learning [02:31:05]

Learning Potential & MDPs [02:36:20]

Universal Function Space [02:42:02]

Goal-Directed Learning & Auto-Curricula [02:42:48]

Advice & Closing Thoughts [02:44:47]


References:

- Why Greatness Cannot Be Planned: The Myth of the Objective by Kenneth O. Stanley and Joel Lehman

https://www.springer.com/gp/book/9783319155234

- Rethinking Exploration: General Intelligence Requires Rethinking Exploration

https://arxiv.org/abs/2106.06860

- The Case for Strong Emergence (Sabine Hossenfelder)

https://arxiv.org/abs/2102.07740

- The Game of Life (Conway)

https://www.conwaylife.com/

- Toolformer: Teaching Language Models to Generate APIs (Meta AI)

https://arxiv.org/abs/2302.04761

- POET: Paired Open-Ended Trailblazer (Uber AI)

https://arxiv.org/abs/1901.01753

- Schmidhuber's Artificial Curiosity

https://people.idsia.ch/~juergen/interest.html

- Gödel Machines

https://people.idsia.ch/~juergen/goedelmachine.html

- PowerPlay

https://arxiv.org/abs/1112.5309

- Robust Prioritized Level Replay: https://openreview.net/forum?id=NfZ6g2OmXEk

- Unsupervised Environment Design: https://arxiv.org/abs/2012.02096

- Excel: Evolving Curriculum Learning for Deep Reinforcement Learning

https://arxiv.org/abs/1901.05431

- Go-Explore: A New Approach for Hard-Exploration Problems

https://arxiv.org/abs/1901.10995

- Learning with AMIGo: Adversarially Motivated Intrinsic Goals

https://www.researchgate.net/publication/342377312_Learning_with_AMIGo_Adversarially_Motivated_Intrinsic_Goals


PRML

https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf

Sutton and Barto

https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf

Apr 16, 2023 · 02:47:16
Unlocking the Brain's Mysteries: Chris Eliasmith on Spiking Neural Networks and the Future of Human-Machine Interaction

Unlocking the Brain's Mysteries: Chris Eliasmith on Spiking Neural Networks and the Future of Human-Machine Interaction

Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/ESrGqhf5CB

Twitter: https://twitter.com/MLStreetTalk


Chris Eliasmith is a renowned interdisciplinary researcher, author, and professor at the University of Waterloo, where he holds the prestigious Canada Research Chair in Theoretical Neuroscience. As the Founding Director of the Centre for Theoretical Neuroscience, Eliasmith leads the Computational Neuroscience Research Group in exploring the mysteries of the brain and its complex functions. His groundbreaking work, including the Neural Engineering Framework, the Neural Engineering Objects (Nengo) software environment, and the Semantic Pointer Architecture, has led to the development of Spaun, the most advanced functional brain simulation to date. Among his numerous achievements, Eliasmith has received the 2015 NSERC John C. Polanyi Award and authored two influential books, "How to Build a Brain" and "Neural Engineering."


Chris' homepage:

http://arts.uwaterloo.ca/~celiasmi/


Interviewers: Dr. Tim Scarfe and Dr. Keith Duggar


TOC:


Intro to Chris [00:00:00]

Continuous Representation in Biologically Plausible Neural Networks [00:06:49]

Legendre Memory Unit and Spatial Semantic Pointer [00:14:36]

Large Contexts and Data in Language Models [00:20:30]

Spatial Semantic Pointers and Continuous Representations [00:24:38]

Auto Convolution [00:30:12]

Abstractions and the Continuity [00:36:33]

Compression, Sparsity, and Brain Representations [00:42:52]

Continual Learning and Real-World Interactions [00:48:05]

Robust Generalization in LLMs and Priors [00:56:11]

Chip design [01:00:41]

Chomsky + Computational Power of NNs and Recursion [01:04:02]

Spiking Neural Networks and Applications [01:13:07]

Limits of Empirical Learning [01:22:43]

Philosophy of Mind, Consciousness etc [01:25:35]

Future of human machine interaction [01:41:28]

Future research and advice to young researchers [01:45:06]


Refs:

http://compneuro.uwaterloo.ca/publications/dumont2023.html 

http://compneuro.uwaterloo.ca/publications/voelker2019lmu.html 

http://compneuro.uwaterloo.ca/publications/voelker2018.html

http://compneuro.uwaterloo.ca/publications/lu2019.html 

https://www.youtube.com/watch?v=I5h-xjddzlY

Apr 10, 2023 · 01:49:37
#112 AVOIDING AGI APOCALYPSE - CONNOR LEAHY

#112 AVOIDING AGI APOCALYPSE - CONNOR LEAHY

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5

In this podcast with the legendary Connor Leahy (CEO of Conjecture), recorded in Dec 2022, we discuss various topics related to artificial intelligence (AI), including AI alignment, the success of ChatGPT, the potential threats of artificial general intelligence (AGI), and the challenges of balancing research and product development at his company, Conjecture. He emphasizes the importance of empathy, dehumanizing our thinking to avoid anthropomorphic biases, and the value of real-world experiences in learning and personal growth. The conversation also covers the Orthogonality Thesis, AI preferences, the mystery of mode collapse, and the paradox of AI alignment.

Connor Leahy expresses concern about the rapid development of AI and the potential dangers it poses, especially as AI systems become more powerful and integrated into society. He argues that we need a better understanding of AI systems to ensure their safe and beneficial development. The discussion also touches on the concept of "futuristic whack-a-mole," where futurists predict potential AGI threats, and others try to come up with solutions for those specific scenarios. However, the problem lies in the fact that there could be many more scenarios that neither party can think of, especially when dealing with a system that's smarter than humans.

https://www.linkedin.com/in/connor-j-leahy/
https://twitter.com/NPCollapse

Interviewer: Dr. Tim Scarfe (Innovation CTO @ XRAI Glass https://xrai.glass/)

TOC:
The success of ChatGPT and its impact on the AI field [00:00:00]
Subjective experience [00:15:12]
AI Architectural discussion including RLHF [00:18:04]
The paradox of AI alignment and the future of AI in society [00:31:44]
The impact of AI on society and politics [00:36:11]
Future shock levels and the challenges of predicting the future [00:45:58]
Long termism and existential risk [00:48:23]
Consequentialism vs. deontology in rationalism [00:53:39]
The Rationalist Community and its Challenges [01:07:37]
AI Alignment and Conjecture [01:14:15]
Orthogonality Thesis and AI Preferences [01:17:01]
Challenges in AI Alignment [01:20:28]
Mechanistic Interpretability in Neural Networks [01:24:54]
Building Cleaner Neural Networks [01:31:36]
Cognitive horizons / The problem with rapid AI development [01:34:52]
Founding Conjecture and raising funds [01:39:36]
Inefficiencies in the market and seizing opportunities [01:45:38]
Charisma, authenticity, and leadership in startups [01:52:13]
Autistic culture and empathy [01:55:26]
Learning from real-world experiences [02:01:57]
Technical empathy and transhumanism [02:07:18]
Moral status and the limits of empathy [02:15:33]
Anthropomorphic Thinking and Consequentialism [02:17:42]
Conjecture: Balancing Research and Product Development [02:20:37]
Epistemology Team at Conjecture [02:31:07]
Interpretability and Deception in AGI [02:36:23]
Futuristic whack-a-mole and predicting AGI threats [02:38:27]

Refs:
1. OpenAI's ChatGPT: https://chat.openai.com/
2. The Mystery of Mode Collapse (Article): https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse
3. The Rationalist Guide to the Galaxy: https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795
5. Alfred Korzybski: https://en.wikipedia.org/wiki/Alfred_Korzybski
6. Instrumental Convergence: https://en.wikipedia.org/wiki/Instrumental_convergence
7. Orthogonality Thesis: https://en.wikipedia.org/wiki/Orthogonality_thesis
8. Brian Tomasik's Essays on Reducing Suffering: https://reducing-suffering.org/
9. Epistemological Framing for AI Alignment Research: https://www.lesswrong.com/posts/Y4YHTBziAscS5WPN7/epistemological-framing-for-ai-alignment-research
10. How to Defeat Mind readers: https://www.alignmentforum.org/posts/EhAbh2pQoAXkm9yor/circumventing-interpretability-how-to-defeat-mind-readers
11. Society of mind: https://www.amazon.co.uk/Society-Mind-Marvin-Minsky/dp/0671607405

Apr 02, 2023 · 02:40:14
#111 - AI moratorium, Eliezer Yudkowsky, AGI risk etc

#111 - AI moratorium, Eliezer Yudkowsky, AGI risk etc

Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5

Send us a voice message which you want us to publish: https://podcasters.spotify.com/pod/show/machinelearningstreettalk/message

In a recent open letter, over 1500 individuals called for a six-month pause on the development of advanced AI systems, expressing concerns over the potential risks AI poses to society and humanity. However, there are issues with this approach, including global competition, unstoppable progress, potential benefits, and the need to manage risks instead of avoiding them. Decision theorist Eliezer Yudkowsky took it a step further in a Time magazine article, calling for an indefinite and worldwide moratorium on Artificial General Intelligence (AGI) development and warning of potential catastrophe if AGI exceeds human intelligence. Yudkowsky urged an immediate halt to all large AI training runs and the shutdown of major GPU clusters, calling for international cooperation to enforce these measures. However, several counterarguments question the validity of Yudkowsky's concerns:

1. Hard limits on AGI
2. Dismissing AI extinction risk
3. Collective action problem
4. Misplaced focus on AI threats

While the potential risks of AGI cannot be ignored, it is essential to consider various arguments and potential solutions before making drastic decisions. As AI continues to advance, it is crucial for researchers, policymakers, and society as a whole to engage in open and honest discussions about the potential consequences and the best path forward. With a balanced approach to AGI development, we may be able to harness its power for the betterment of humanity while mitigating its risks.

Eliezer Yudkowsky: https://en.wikipedia.org/wiki/Eliezer_Yudkowsky
Connor Leahy: https://twitter.com/NPCollapse (we will release that interview soon)
Gary Marcus: http://garymarcus.com/index.html
Tim Scarfe is the innovation CTO of XRAI Glass: https://xrai.glass/

Gary clip filmed at AIUK https://ai-uk.turing.ac.uk/programme/ and our appreciation to them for giving us a press pass. Check out their conference next year!
WIRED clip from Gary came from here: https://www.youtube.com/watch?v=Puo3VkPkNZ4

Refs:


Statement from the listed authors of Stochastic Parrots on the “AI pause” letter (Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, Margaret Mitchell)

https://www.dair-institute.org/blog/letter-statement-March2023
Eliezer Yudkowsky on Lex: https://www.youtube.com/watch?v=AaTRHFaaPG8
Pause Giant AI Experiments: An Open Letter https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Pausing AI Developments Isn't Enough. We Need to Shut it All Down (Eliezer Yudkowsky) https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

Apr 01, 2023 · 26:58
#110 Dr. STEPHEN WOLFRAM - HUGE ChatGPT+Wolfram announcement!

#110 Dr. STEPHEN WOLFRAM - HUGE ChatGPT+Wolfram announcement!

HUGE ANNOUNCEMENT, CHATGPT+WOLFRAM! You saw it HERE first! YT version: https://youtu.be/z5WZhCBRDpU Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5
Stephen's announcement post: https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
OpenAI's announcement post: https://openai.com/blog/chatgpt-plugins

In an era of technology and innovation, few individuals have left as indelible a mark on the fabric of modern science as our esteemed guest, Dr. Stephen Wolfram. Dr. Wolfram is a renowned polymath who has made significant contributions to the fields of physics, computer science, and mathematics. A prodigy, Wolfram earned a Ph.D. in theoretical physics from the California Institute of Technology by the age of 20 and became the youngest recipient of the prestigious MacArthur Fellowship at the age of 21.

Wolfram's groundbreaking computational tool, Mathematica, was launched in 1988 and has become a cornerstone for researchers and innovators worldwide. In 2002, he published "A New Kind of Science," a paradigm-shifting work that explores the foundations of science through the lens of computational systems. In 2009, Wolfram created Wolfram Alpha, a computational knowledge engine utilized by millions of users worldwide. His current focus is on the Wolfram Language, a powerful programming language designed to democratize access to cutting-edge technology. Wolfram's numerous accolades include honorary doctorates and fellowships from prestigious institutions. As an influential thinker, Dr. Wolfram has dedicated his life to unraveling the mysteries of the universe and making computation accessible to all.

First of all... we have an announcement to make, you heard it FIRST here on MLST!

Intro [00:00:00]
Big announcement! Wolfram + ChatGPT! [00:02:57]
What does it mean to understand? [00:05:33]
Feeding information back into the model [00:13:48]
Semantics and cognitive categories [00:20:09]
Navigating the ruliad [00:23:50]
Computational irreducibility [00:31:39]
Conceivability and interestingness [00:38:43]
Human intelligible sciences [00:43:43]

Mar 23, 2023 · 57:30
#109 - Dr. DAN MCQUILLAN - Resisting AI

#109 - Dr. DAN MCQUILLAN - Resisting AI

YT version: https://youtu.be/P1j3VoKBxbc (references in pinned comment) Support us! https://www.patreon.com/mlst MLST Discord: https://discord.gg/aNPkGUQtc5 Dan McQuillan, a visionary in digital culture and social innovation, emphasizes the importance of understanding technology's complex relationship with society. As an academic at Goldsmiths, University of London, he fosters interdisciplinary collaboration and champions data-driven equity and ethical technology. Dan's career includes roles at Amnesty International and Social Innovation Camp, showcasing technology's potential to empower and bring about positive change. In this conversation, we discuss the challenges and opportunities at the intersection of technology and society, exploring the profound impact of our digital world. Interviewer: Dr. Tim Scarfe


[00:00:00] Dan's background and journey to academia

[00:03:30] Dan's background and journey to academia

[00:04:10] Writing the book "Resisting AI"

[00:08:30] Necropolitics and its relation to AI

[00:10:06] AI as a new form of colonization

[00:12:57] LLMs as a new form of neo-techno-imperialism

[00:15:47] Technology for good and AGI's skewed worldview

[00:17:49] Transhumanism, eugenics, and intelligence

[00:20:45] Valuing differences (disability) and challenging societal norms

[00:26:08] Re-ontologizing and the philosophy of information

[00:28:19] New materialism and the impact of technology on society

[00:30:32] Intelligence, meaning, and materiality

[00:31:43] The constraints of physical laws and the importance of science

[00:32:44] Exploring possibilities to reduce suffering and increase well-being

[00:33:29] The division between meaning and material in our experiences

[00:35:36] Machine learning, data science, and neoplatonic approach to understanding reality

[00:37:56] Different understandings of cognition, thought, and consciousness

[00:39:15] Enactivism and its variants in cognitive science

[00:40:58] Jordan Peterson

[00:44:47] Relationism, relativism, and finding the correct relational framework

[00:47:42] Recognizing privilege and its impact on social interactions

[00:49:10] Intersectionality / Feminist thinking and the concept of care in social structures

[00:51:46] Intersectionality and its role in understanding social inequalities

[00:54:26] The entanglement of history, technology, and politics

[00:57:39] ChatGPT article - we come to bury ChatGPT

[00:59:41] Statistical pattern learning and convincing patterns in AI

[01:01:27] Anthropomorphization and understanding in AI

[01:03:26] AI in education and critical thinking

[01:06:09] European Union policies and trustable AI

[01:07:52] AI reliability and the halo effect

[01:09:26] AI as a tool enmeshed in society

[01:13:49] Luddites

[01:15:16] AI is a scam

[01:15:31] AI and Social Relations

[01:16:49] Invisible Labor in AI and Machine Learning

[01:21:09] Exploitative AI / alignment

[01:23:50] Science fiction AI / moral frameworks

[01:27:22] Discussing Stochastic Parrots and Nihilism

[01:30:36] Human Intelligence vs. Language Models

[01:32:22] Image Recognition and Emulation vs. Experience

[01:34:32] Thought Experiments and Philosophy in AI Ethics (mimicry)

[01:41:23] Abstraction, reduction, and grounding in reality

[01:43:13] Process philosophy and the possibility of change

[01:49:55] Mental health, AI, and epistemic injustice

[01:50:30] Hermeneutic injustice and gendered techniques

[01:53:57] AI and politics

[01:59:24] Epistemic injustice and testimonial injustice

[02:11:46] Fascism and AI discussion

[02:13:24] Violence in various systems

[02:16:52] Recognizing systemic violence

[02:22:35] Fascism in Today's Society

[02:33:33] Pace and Scale of Technological Change

[02:37:38] Alternative approaches to AI and society

[02:44:09] Self-Organization at Successive Scales / cybernetics

Mar 20, 2023 · 02:51:03
#108 - Dr. JOEL LEHMAN - Machine Love [Staff Favourite]

#108 - Dr. JOEL LEHMAN - Machine Love [Staff Favourite]

Support us! https://www.patreon.com/mlst  

MLST Discord: https://discord.gg/aNPkGUQtc5


We are honoured to welcome Dr. Joel Lehman, an eminent machine learning research scientist, whose work in AI safety, reinforcement learning, creative open-ended search algorithms, and indeed the philosophy of open-endedness and abandoning objectives has paved the way for innovative ideas that challenge our preconceptions and inspire new visions for the future.

Dr. Lehman's thought-provoking book, "Why Greatness Cannot Be Planned", penned with our MLST favourite Professor Kenneth Stanley, has left an indelible mark on the field and profoundly impacted the way we view innovation and the serendipitous nature of discovery. Those of you who haven't watched our special edition show on that should do so at your earliest convenience! Building upon this foundation, Dr. Lehman has ventured into the domain of AI systems that embody principles of love, care, responsibility, respect, and knowledge, drawing from the works of Maslow, Erich Fromm, and positive psychology.


YT version: https://youtu.be/23-TXgJEv-Q


http://joellehman.com/

https://twitter.com/joelbot3000


Interviewer: Dr. Tim Scarfe


TOC:

Intro [00:00:00]

Model [00:04:26]

Intro and Paper Intro [00:08:52]

Subjectivity [00:16:07]

Reflections on Greatness Book [00:19:30]

Representing Subjectivity [00:29:24]

Nagel's Bat [00:31:49]

Abstraction [00:38:58]

Love as Action Rather Than Feeling [00:42:58]

Reontologisation [00:57:38]

Self Help [01:04:15]

Meditation [01:09:02]

The Human Reward Function / Effective... [01:16:52]

Machine Hate [01:28:32]

Societal Harms [01:31:41]

Lenses We Use Obscuring Reality [01:56:36]

Meta Optimisation and Evolution [02:03:14]

Conclusion [02:07:06]


References:


What Is It Like to Be a Bat? (Thomas Nagel)

https://warwick.ac.uk/fac/cross_fac/iatl/study/ugmodules/humananimalstudies/lectures/32/nagel_bat.pdf


Why Greatness Cannot Be Planned: The Myth of the Objective (Kenneth O. Stanley and Joel Lehman)

https://link.springer.com/book/10.1007/978-3-319-15524-1 


Machine Love (Joel Lehman)

https://arxiv.org/abs/2302.09248 


How effective altruists ignored risk (Carla Cremer)

https://www.vox.com/future-perfect/23569519/effective-altrusim-sam-bankman-fried-will-macaskill-ea-risk-decentralization-philanthropy


Philosophy tube - The Rich Have Their Own Ethics: Effective Altruism

https://www.youtube.com/watch?v=Lm0vHQYKI-Y


Abandoning Objectives: Evolution through the Search for Novelty Alone (Joel Lehman and Kenneth O. Stanley)

https://www.cs.swarthmore.edu/~meeden/DevelopmentalRobotics/lehman_ecj11.pdf

Mar 16, 2023
02:09:39
#107 - Dr. RAPHAËL MILLIÈRE - Linguistics, Theory of Mind, Grounding

#107 - Dr. RAPHAËL MILLIÈRE - Linguistics, Theory of Mind, Grounding

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

Dr. Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. His research draws from his expertise in philosophy and cognitive science to explore the implications of recent progress in deep learning for models of human cognition, as well as various issues in ethics and aesthetics. He is also investigating what underlies the capacity to represent oneself as oneself at a fundamental level, in humans and non-human animals; as well as the role that self-representation plays in perception, action, and memory. In a world where technology is rapidly advancing, Dr. Millière is striving to gain a better understanding of how artificial neural networks work, and to establish fair and meaningful comparisons between humans and machines in various domains in order to shed light on the implications of artificial intelligence for our lives.

https://www.raphaelmilliere.com/

https://twitter.com/raphaelmilliere


Here is a version with hesitation sounds like "um" removed if you prefer (I didn't notice them personally): https://share.descript.com/view/aGelyTl2xpN

YT: https://www.youtube.com/watch?v=fhn6ZtD6XeE


TOC:

Intro to Raphael [00:00:00]

Intro: Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière) [00:01:18]

Show Kick off [00:07:10]

LLMs [00:08:37]

Semantic Competence/Understanding [00:18:28]

Forming Analogies/JPG Compression Article [00:30:17]

Compositional Generalisation [00:37:28]

Systematicity [00:47:08]

Language of Thought [00:51:28]

Bigbench (Conceptual Combinations) [00:57:37]

Symbol Grounding [01:11:13]

World Models [01:26:43]

Theory of Mind [01:30:57]


Refs (this is truncated, full list on YT video description):


Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière)

https://nautil.us/moving-beyond-mimicry-in-artificial-intelligence-238504/


On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (Bender et al)

https://dl.acm.org/doi/10.1145/3442188.3445922


ChatGPT Is a Blurry JPEG of the Web (Ted Chiang)

https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web


The Debate Over Understanding in AI's Large Language Models (Melanie Mitchell)

https://arxiv.org/abs/2210.13966


Talking About Large Language Models (Murray Shanahan)

https://arxiv.org/abs/2212.03551


Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender)

https://aclanthology.org/2020.acl-main.463/


The symbol grounding problem (Stevan Harnad)

https://arxiv.org/html/cs/9906002


Why the Abstraction and Reasoning Corpus is interesting and important for AI (Mitchell)

https://aiguide.substack.com/p/why-the-abstraction-and-reasoning


Linguistic relativity (Sapir–Whorf hypothesis)

https://en.wikipedia.org/wiki/Linguistic_relativity


Cooperative principle (Grice's four maxims of conversation - quantity, quality, relation, and manner)

https://en.wikipedia.org/wiki/Cooperative_principle

Mar 13, 2023
01:43:55
#106 - Prof. KARL FRISTON 3.0 - Collective Intelligence [Special Edition]

#106 - Prof. KARL FRISTON 3.0 - Collective Intelligence [Special Edition]

This show is sponsored by Numerai. Please visit them via our sponsor link (we would really appreciate it): http://numer.ai/mlst

Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals. 
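To make the "act on the environment to reduce uncertainty and achieve goals" idea concrete, here is a minimal toy sketch in Python (my own construction, not anything from the episode or Friston's papers) that scores candidate actions by expected ambiguity plus divergence from preferred observations, the two ingredients of expected free energy in discrete active inference; all distributions and action names are made up for illustration:

import numpy as np

# Toy illustration: score actions by how much they are expected to reduce
# uncertainty about a hidden state while steering observations towards
# preferred outcomes. All numbers below are invented for the example.

n_states, n_obs = 3, 3
belief = np.ones(n_states) / n_states              # current belief over hidden states

# Hypothetical likelihoods p(o | s, a) for two candidate actions.
likelihood = {
    "probe": np.array([[0.90, 0.05, 0.05],
                       [0.05, 0.90, 0.05],
                       [0.05, 0.05, 0.90]]),        # informative observation channel
    "wait":  np.full((n_states, n_obs), 1 / n_obs)  # uninformative channel
}

preferred_obs = np.array([0.8, 0.1, 0.1])           # "goal": observations we want to see

def expected_score(A):
    """Lower is better: expected ambiguity plus divergence from preferred observations."""
    predicted_obs = belief @ A                                           # p(o) under current belief
    ambiguity = -np.sum(belief * np.sum(A * np.log(A + 1e-12), axis=1))  # expected entropy of p(o|s)
    risk = np.sum(predicted_obs * (np.log(predicted_obs + 1e-12)
                                   - np.log(preferred_obs)))             # KL(p(o) || preferred)
    return ambiguity + risk

scores = {a: expected_score(A) for a, A in likelihood.items()}
print(scores)
print("chosen action:", min(scores, key=scores.get))  # the informative "probe" action wins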

To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.


YT version: https://www.youtube.com/watch?v=V_VXOdf1NMw

Support us! https://www.patreon.com/mlst 

MLST Discord: https://discord.gg/aNPkGUQtc5


TOC: 

Intro [00:00:00]

Numerai (Sponsor segment) [00:07:10]

Designing Ecosystems of Intelligence from First Principles (Friston et al) [00:09:48]

Information / Infosphere and human agency [00:18:30]

Intelligence [00:31:38]

Reductionism [00:39:36]

Universalism [00:44:46]

Emergence [00:54:23]

Markov blankets [01:02:11]

Whole part relationships / structure learning [01:22:33]

Enactivism [01:29:23]

Knowledge and Language [01:43:53]

ChatGPT [01:50:56]

Ethics (is-ought) [02:07:55]

Can people be evil? [02:35:06]

Ethics in AI, subjectivity [02:39:05]

Final thoughts [02:57:00]


References:

Designing Ecosystems of Intelligence from First Principles (Friston et al)

https://arxiv.org/abs/2212.01354


GLOM - How to represent part-whole hierarchies in a neural network (Hinton)

https://arxiv.org/pdf/2102.12627.pdf


Seven Brief Lessons on Physics (Carlo Rovelli)

https://www.amazon.co.uk/Seven-Brief-Lessons-Physics-Rovelli/dp/0141981725


How Emotions Are Made: The Secret Life of the Brain (Lisa Feldman Barrett)

https://www.amazon.co.uk/How-Emotions-Are-Made-Secret/dp/B01N3D4OON


Am I Self-Conscious? (Or Does Self-Organization Entail Self-Consciousness?) (Karl Friston)

https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00579/full


Integrated information theory (Giulio Tononi)

https://en.wikipedia.org/wiki/Integrated_information_theory

Mar 11, 2023
02:59:21
#105 - Dr. MICHAEL OLIVER [CSO - Numerai]

#105 - Dr. MICHAEL OLIVER [CSO - Numerai]

Access Numerai here: http://numer.ai/mlst


Michael Oliver is the Chief Scientist at Numerai, a hedge fund that crowdsources machine learning models from data scientists. He has a PhD in Computational Neuroscience from UC Berkeley and was a postdoctoral researcher at the Allen Institute for Brain Science before joining Numerai in 2020. He is also the host of Numerai Quant Club, a YouTube series where he discusses Numerai’s research, data and challenges.


YT version: https://youtu.be/61s8lLU7sFg


TOC:

[00:00:00] Introduction to Michael and Numerai

[00:02:03] Understanding / new Bing

[00:22:47] Quant vs Neuroscience

[00:36:43] Role of language in cognition and planning, and subjective... 

[00:45:47] Boundaries in finance modelling

[00:48:00] Numerai

[00:57:37] Aggregation systems

[01:00:52] Getting started on Numerai

[01:03:21] What models are people using

[01:04:23] Numerai Problem Setup

[01:05:49] Regimes in financial data and quant talk

[01:11:18] Esoteric approaches used on Numerai?

[01:13:59] Curse of dimensionality

[01:16:32] Metrics

[01:19:10] Outro


References:


Growing Neural Cellular Automata (Alexander Mordvintsev)

https://distill.pub/2020/growing-ca/


A Thousand Brains: A New Theory of Intelligence (Jeff Hawkins)

https://www.amazon.fr/Thousand-Brains-New-Theory-Intelligence/dp/1541675819


Perceptual Neuroscience: The Cerebral Cortex (Vernon B. Mountcastle)

https://www.amazon.ca/Perceptual-Neuroscience-Cerebral-Vernon-Mountcastle/dp/0674661885


Numerai Quant Club with Michael Oliver

https://www.youtube.com/watch?v=eLIxarbDXuQ&list=PLz3D6SeXhT3tTu8rhZmjwDZpkKi-UPO1F


Numerai YT channel

https://www.youtube.com/@Numerai/featured


Support us! https://www.patreon.com/mlst 

MLST Discord: https://discord.gg/aNPkGUQtc5

Mar 04, 2023
01:20:42
#104 - Prof. CHRIS SUMMERFIELD - Natural General Intelligence [SPECIAL EDITION]

#104 - Prof. CHRIS SUMMERFIELD - Natural General Intelligence [SPECIAL EDITION]

Support us! https://www.patreon.com/mlst  

MLST Discord: https://discord.gg/aNPkGUQtc5


Christopher Summerfield is a Professor of Cognitive Neuroscience in the Department of Experimental Psychology at the University of Oxford and a Research Scientist at DeepMind UK. His work focuses on the neural and computational mechanisms by which humans make decisions.

Chris has just released an incredible new book on AI called "Natural General Intelligence". It's my favourite book on AI that I have read so far. 

The book explores the algorithms and architectures that are driving progress in AI research, and discusses intelligence in the language of psychology and biology, using examples and analogies to be comprehensible to a wide audience. It also tackles longstanding theoretical questions about the nature of thought and knowledge.

With Chris' permission, I read out a summarised version of Chapter 2 of his book, which is on Intelligence, during the 30-minute MLST introduction.  

Buy his book here:

https://global.oup.com/academic/product/natural-general-intelligence-9780192843883?cc=gb&lang=en&


YT version: https://youtu.be/31VRbxAl3t0

Interviewer: Dr. Tim Scarfe


TOC:

[00:00:00] Walk and talk with Chris on Knowledge and Abstractions

[00:04:08] Intro to Chris and his book

[00:05:55] (Intro) Tim reads Chapter 2: Intelligence 

[00:09:28] Intro continued: Goodhart's law

[00:15:37] Intro continued: The "swiss cheese" situation  

[00:20:23] Intro continued: On Human Knowledge

[00:23:37] Intro continued: Neats and Scruffies

[00:30:22] Interview kick off 

[00:31:59] What does it mean to understand?

[00:36:18] Aligning our language models

[00:40:17] Creativity 

[00:41:40] "Meta" AI and basins of attraction 

[00:51:23] What can Neuroscience impart to AI

[00:54:43] Sutton, neats and scruffies and human alignment

[01:02:05] Reward is enough

[01:19:46] John von Neumann and Intelligence

[01:23:56] Compositionality


References:


The Language Game (Morten H. Christiansen, Nick Chater)

https://www.penguin.co.uk/books/441689/the-language-game-by-morten-h-christiansen-and--nick-chater/9781787633483

Theory of general factor (Spearman)

https://www.proquest.com/openview/7c2c7dd23910c89e1fc401e8bb37c3d0/1?pq-origsite=gscholar&cbl=1818401

Intelligence Reframed (Howard Gardner)

https://books.google.co.uk/books?hl=en&lr=&id=Qkw4DgAAQBAJ&oi=fnd&pg=PT6&dq=howard+gardner+multiple+intelligences&ots=ERUU0u5Usq&sig=XqiDgNUIkb3K9XBq0vNbFmXWKFs#v=onepage&q=howard%20gardner%20multiple%20intelligences&f=false

The master algorithm (Pedro Domingos)

https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543

A Thousand Brains: A New Theory of Intelligence (Jeff Hawkins)

https://www.amazon.co.uk/Thousand-Brains-New-Theory-Intelligence/dp/1541675819

The bitter lesson (Rich Sutton)

http://www.incompleteideas.net/IncIdeas/BitterLesson.html

Feb 22, 2023
01:28:55
#103 - Prof. Edward Grefenstette - Language, Semantics, Philosophy

#103 - Prof. Edward Grefenstette - Language, Semantics, Philosophy

Support us! https://www.patreon.com/mlst 

MLST Discord: https://discord.gg/aNPkGUQtc5

YT: https://youtu.be/i9VPPmQn9HQ


Edward Grefenstette is a Franco-American computer scientist who currently serves as Head of Machine Learning at Cohere and Honorary Professor at UCL. He has previously been a research scientist at Facebook AI Research and a staff research scientist at DeepMind, and was also the CTO of Dark Blue Labs. Prior to his move to industry, Edward was a Fulford Junior Research Fellow at Somerville College, University of Oxford, and lectured at Hertford College. He obtained his BSc in Physics and Philosophy from the University of Sheffield and did graduate work in the philosophy department at the University of St Andrews. His research draws on topics and methods from Machine Learning, Computational Linguistics and Quantum Information Theory, and he has done work implementing and evaluating compositional vector-based models of natural language semantics and empirical semantic knowledge discovery.


https://www.egrefen.com/

https://cohere.ai/


TOC:

[00:00:00] Introduction

[00:02:52] Differential Semantics

[00:06:56] Concepts

[00:10:20] Ontology

[00:14:02] Pragmatics

[00:16:55] Code helps with language

[00:19:02] Montague

[00:22:13] RLHF

[00:31:54] Swiss cheese problem / retrieval augmented

[00:37:06] Intelligence / Agency

[00:43:33] Creativity

[00:46:41] Common sense

[00:53:46] Thinking vs knowing



References:


Large language models are not zero-shot communicators (Laura Ruis)

https://arxiv.org/abs/2210.14986


Some remarks on Large Language Models (Yoav Goldberg)

https://gist.github.com/yoavg/59d174608e92e845c8994ac2e234c8a9


Quantum Natural Language Processing (Bob Coecke)

https://www.cs.ox.ac.uk/people/bob.coecke/QNLP-ACT.pdf


Constitutional AI: Harmlessness from AI Feedback

https://www.anthropic.com/constitutional.pdf


Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Patrick Lewis)

https://www.patricklewis.io/publication/rag/


Natural General Intelligence (Prof. Christopher Summerfield)

https://global.oup.com/academic/product/natural-general-intelligence-9780192843883


ChatGPT with Rob Miles - Computerphile

https://www.youtube.com/watch?v=viJt_DXTfwA

Feb 11, 2023
01:01:47
#102 - Prof. MICHAEL LEVIN, Prof. IRINA RISH - Emergence, Intelligence, Transhumanism

#102 - Prof. MICHAEL LEVIN, Prof. IRINA RISH - Emergence, Intelligence, Transhumanism

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

YT: https://youtu.be/Vbi288CKgis


Michael Levin is a Distinguished Professor in the Biology department at Tufts University, and the holder of the Vannevar Bush endowed Chair. He is the Director of the Allen Discovery Center at Tufts and the Tufts Center for Regenerative and Developmental Biology. His research focuses on understanding the biophysical mechanisms of pattern regulation and harnessing endogenous bioelectric dynamics for rational control of growth and form.

The capacity to generate a complex, behaving organism from the single cell of a fertilized egg is one of the most amazing aspects of biology. Levin's lab integrates approaches from developmental biology, computer science, and cognitive science to investigate the emergence of form and function. Using biophysical and computational modeling approaches, they seek to understand the collective intelligence of cells as they navigate physiological, transcriptional, morphogenetic, and behavioral spaces. They develop conceptual frameworks for basal cognition and diverse intelligence, including synthetic organisms and AI.

Also joining us this evening is Irina Rish. Irina is a Full Professor at the Université de Montréal's Computer Science and Operations Research department, a core member of Mila - Quebec AI Institute, as well as the holder of the Canada CIFAR AI Chair and the Canadian Excellence Research Chair in Autonomous AI. She has a PhD in AI from UC Irvine. Her research focuses on machine learning, neural data analysis, neuroscience-inspired AI, continual lifelong learning, optimization algorithms, sparse modelling, probabilistic inference, dialog generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. 

Interviewer: Dr. Tim Scarfe


TOC:

[00:00:00] Introduction

[00:02:09] Emergence

[00:13:16] Scaling Laws

[00:23:12] Intelligence

[00:44:36] Transhumanism


Prof. Michael Levin

https://en.wikipedia.org/wiki/Michael_Levin_(biologist)

https://www.drmichaellevin.org/

https://twitter.com/drmichaellevin


Prof. Irina Rish

https://twitter.com/irinarish

https://irina-rish.com/

Feb 11, 2023
55:17
#100 Dr. PATRICK LEWIS (co:here) - Retrieval Augmented Generation

#100 Dr. PATRICK LEWIS (co:here) - Retrieval Augmented Generation

Dr. Patrick Lewis is a London-based AI and Natural Language Processing Research Scientist, working at co:here. Prior to this, Patrick worked as a research scientist at the Fundamental AI Research Lab (FAIR) at Meta AI. During his PhD, Patrick split his time between FAIR and University College London, working with Sebastian Riedel and Pontus Stenetorp. 

Patrick’s research focuses on the intersection of information retrieval techniques (IR) and large language models (LLMs). He has done extensive work on Retrieval-Augmented Language Models. His current focus is on building more powerful, efficient, robust, and update-able models that can perform well on a wide range of NLP tasks, but also excel on knowledge-intensive NLP tasks such as Question Answering and Fact Checking.
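For readers unfamiliar with the retrieve-then-generate pattern behind Retrieval-Augmented Generation, here is a minimal sketch in Python (my own toy example, not Patrick's code or co:here's API); the documents, the lexical-overlap scorer, and the generate() stub are placeholders standing in for a real retriever and language model:

from collections import Counter

documents = [
    "Retrieval-augmented generation conditions a language model on retrieved passages.",
    "Numerai is a hedge fund that crowdsources machine learning models.",
    "Fact checking and question answering are knowledge-intensive NLP tasks.",
]

def score(query: str, doc: str) -> float:
    """Toy lexical-overlap relevance score standing in for a dense or BM25 retriever."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    return f"[LLM answer conditioned on]:\n{prompt}"

query = "What are knowledge-intensive NLP tasks?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))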


YT version: https://youtu.be/Dm5sfALoL1Y

MLST Discord: https://discord.gg/aNPkGUQtc5

Support us! https://www.patreon.com/mlst


References:

Patrick Lewis (Natural Language Processing Research Scientist @ co:here)

https://www.patricklewis.io/

Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (Patrick Lewis et al)

https://arxiv.org/abs/2005.11401

Atlas: Few-shot Learning with Retrieval Augmented Language Models (Gautier Izacard, Patrick Lewis, et al)

https://arxiv.org/abs/2208.03299

Improving language models by retrieving from trillions of tokens (RETRO) (Sebastian Borgeaud et al)

https://arxiv.org/abs/2112.04426

Feb 10, 2023
26:28
#99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism

#99 - CARLA CREMER & IGOR KRAWCZUK - X-Risk, Governance, Effective Altruism

YT version (with references): https://www.youtube.com/watch?v=lxaTinmKxs0

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5


Carla Cremer and Igor Krawczuk argue that AI risk should be understood as an old problem of politics, power and control with known solutions, and that threat models should be driven by empirical work. The interaction between FTX and the Effective Altruism community has sparked a lot of discussion about the dangers of optimization, and Carla's Vox article highlights the need for an institutional turn when taking on a responsibility like risk management for humanity.


Carla's “Democratizing Risk” paper found that certain types of risks fall through the cracks if they are just categorized into climate change or biological risks. Deliberative democracy has been found to be a better way to make decisions, and AI tools can be used to scale this type of democracy and be used for good, but the transparency of these algorithms to the citizens using the platform must be taken into consideration.


Aggregating people’s diverse ways of thinking about a problem and creating a risk-averse procedure makes it highly likely that the group converges on the best policy. There needs to be a good reason to trust one organization with the risk management of humanity, and all the different ways of thinking about risk must be taken into account. AI tools can help to scale this type of deliberative democracy, but the transparency of these algorithms must be taken into consideration.


The ambition of the EA community and Altruism Inc. is to protect and do risk management for the whole of humanity, and this requires an institutional turn in order to do it effectively. The dangers of optimization are real, and it is essential to ensure that the risk management of humanity is done properly and ethically. By aggregating people’s diverse ways of thinking about a problem and creating a risk-averse procedure, it becomes far more likely that we converge on the best policy.


Carla Zoe Cremer

https://carlacremer.github.io/


Igor Krawczuk

https://krawczuk.eu/


Interviewer: Dr. Tim Scarfe


TOC:

[00:00:00] Introduction: Vox article and effective altruism / FTX

[00:11:12] Luciano Floridi on Governance and Risk

[00:15:50] Connor Leahy on alignment

[00:21:08] Ethan Caballero on scaling

[00:23:23] Alignment, Values and politics

[00:30:50] Singularitarians vs AI-theists

[00:41:56] Consequentialism

[00:46:44] Does scale make a difference?

[00:51:53] Carla's Democratising risk paper

[01:04:03] Vox article - How effective altruists ignored risk

[01:20:18] Does diversity breed complexity?

[01:29:50] Collective rationality

[01:35:16] Closing statements

Feb 05, 2023
01:39:46
[NO MUSIC] #98 - Prof. LUCIANO FLORIDI - ChatGPT, Singularitarians, Ethics, Philosophy of Information

[NO MUSIC] #98 - Prof. LUCIANO FLORIDI - ChatGPT, Singularitarians, Ethics, Philosophy of Information

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

YT version: https://youtu.be/YLNGvvgq3eg


We are living in an age of rapid technological advancement, and with this growth comes a digital divide. Professor Luciano Floridi of the Oxford Internet Institute / Oxford University believes that this divide not only affects our understanding of the implications of this new age, but also the organization of a fair society. 

The Information Revolution has been transforming the global economy, with the majority of global GDP now relying on intangible goods, such as information-related services. This in turn has led to the generation of immense amounts of data, more than humanity has ever seen in its history. With 95% of this data being generated by the current generation, Professor Floridi believes that we are becoming overwhelmed by this data, and that our agency as humans is being eroded as a result. 

According to Professor Floridi, the digital divide has caused a lack of balance between technological growth and our understanding of this growth. He believes that the infosphere is becoming polluted and the manifold of the infosphere is increasingly determined by technology and AI. Identifying, anticipating and resolving these problems has become essential, and Professor Floridi has dedicated his research to the Philosophy of Information, Philosophy of Technology and Digital Ethics. 

We must equip ourselves with a viable philosophy of information to help us better understand and address the risks of this new information age. Professor Floridi is leading the charge, and his research on Digital Ethics, the Philosophy of Information and the Philosophy of Technology is helping us to better anticipate, identify and resolve problems caused by the digital divide.

TOC:

[00:00:00] Introduction to Luciano and his ideas

[00:14:00] Chat GPT / language models

[00:28:45] AI risk / "Singularitarians" 

[00:37:15] Forms of governance

[00:43:56] Re-ontologising the world

[00:55:56] It from bit and Computationalism and philosophy without purpose

[01:03:05] Getting into Digital Ethics


Interviewer: Dr. Tim Scarfe


References:

GPT‐3: Its Nature, Scope, Limits, and Consequences [Floridi]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827044


Ultraintelligent Machines, Singularity, and Other Sci-fi Distractions about AI [Floridi]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4222347


The Philosophy of Information [Floridi]

https://www.amazon.co.uk/Philosophy-Information-Luciano-Floridi/dp/0199232393


Information: A Very Short Introduction [Floridi]

https://www.amazon.co.uk/Information-Very-Short-Introduction-Introductions/dp/0199551375


https://en.wikipedia.org/wiki/Luciano_Floridi

https://www.philosophyofinformation.net/

Feb 03, 2023
01:06:13
#98 - Prof. LUCIANO FLORIDI - ChatGPT, Superintelligence, Ethics, Philosophy of Information

#98 - Prof. LUCIANO FLORIDI - ChatGPT, Superintelligence, Ethics, Philosophy of Information

Support us! https://www.patreon.com/mlst

MLST Discord: https://discord.gg/aNPkGUQtc5

YT version: https://youtu.be/YLNGvvgq3eg

(If music annoying, skip to main interview @ 14:14)

We are living in an age of rapid technological advancement, and with this growth comes a digital divide. Professor Luciano Floridi of the Oxford Internet Institute / Oxford University believes that this divide not only affects our understanding of the implications of this new age, but also the organization of a fair society. 

The Information Revolution has been transforming the global economy, with the majority of global GDP now relying on intangible goods, such as information-related services. This in turn has led to the generation of immense amounts of data, more than humanity has ever seen in its history. With 95% of this data being generated by the current generation, Professor Floridi believes that we are becoming overwhelmed by this data, and that our agency as humans is being eroded as a result. 

According to Professor Floridi, the digital divide has caused a lack of balance between technological growth and our understanding of this growth. He believes that the infosphere is becoming polluted and the manifold of the infosphere is increasingly determined by technology and AI. Identifying, anticipating and resolving these problems has become essential, and Professor Floridi has dedicated his research to the Philosophy of Information, Philosophy of Technology and Digital Ethics. 

We must equip ourselves with a viable philosophy of information to help us better understand and address the risks of this new information age. Professor Floridi is leading the charge, and his research on Digital Ethics, the Philosophy of Information and the Philosophy of Technology is helping us to better anticipate, identify and resolve problems caused by the digital divide.


TOC:

[00:00:00] Introduction to Luciano and his ideas

[00:14:40] Chat GPT / language models

[00:29:24] AI risk / "Singularitarians" 

[00:30:34] Re-ontologising the world

[00:56:35] It from bit and Computationalism and philosophy without purpose

[01:03:43] Getting into Digital Ethics


References:

GPT‐3: Its Nature, Scope, Limits, and Consequences [Floridi]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3827044


Ultraintelligent Machines, Singularity, and Other Sci-fi Distractions about AI [Floridi]

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4222347


The Philosophy of Information [Floridi]

https://www.amazon.co.uk/Philosophy-Information-Luciano-Floridi/dp/0199232393


Information: A Very Short Introduction [Floridi]

https://www.amazon.co.uk/Information-Very-Short-Introduction-Introductions/dp/0199551375


https://en.wikipedia.org/wiki/Luciano_Floridi

https://www.philosophyofinformation.net/

Feb 03, 2023
01:06:52
#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

#97 SREEJAN KUMAR - Human Inductive Biases in Machines from Language

Research has shown that humans possess strong inductive biases which enable them to quickly learn and generalize. In order to instill these same useful human inductive biases into machines, Sreejan Kumar presented a paper at the NeurIPS conference which won an Outstanding Paper award. The paper is called Using Natural Language and Program Abstractions to Instill Human Inductive Biases in Machines.

This paper focuses on using a controlled stimulus space of two-dimensional binary grids to define the space of abstract concepts that humans have and a feedback loop of collaboration between humans and machines to understand the differences in human and machine inductive biases. 

It is important to make machines more human-like so that we can collaborate with them and understand their behavior. Synthesised discrete programs running on a Turing machine computational model, instead of a neural network substrate, offer promise for the future of artificial intelligence. Neural networks and program induction should both be explored to get a well-rounded view of intelligence which works across multiple domains and computational substrates, and which can acquire a diverse set of capabilities.

Natural language understanding in models can also be improved by instilling human language biases and programs into AI models. Sreejan used an experimental framework consisting of dual task distributions, one generated from human priors and one from machine priors, to understand the differences in human and machine inductive biases. Furthermore, he demonstrated that compressive abstractions can be used to capture the essential structure of the environment for more human-like behavior. This means that emergent language-based inductive priors can be distilled into artificial neural networks, and AI models can be aligned to us, the world, and indeed our values.
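As a rough illustration of the dual-task-distribution idea, here is a toy sketch in Python (my own construction, not the paper's code or stimuli): one distribution of two-dimensional binary grids with human-like, compressible structure, and a control distribution with matched pixel density but no such structure. A meta-learner trained and evaluated across the two would expose which inductive biases it has acquired:

import numpy as np

rng = np.random.default_rng(0)
GRID = 7  # grid side length, chosen arbitrarily for the example

def structured_grid() -> np.ndarray:
    """Human-prior-like stimulus: a filled rectangle at a random position."""
    g = np.zeros((GRID, GRID), dtype=int)
    r0, c0 = rng.integers(0, GRID - 2, size=2)
    h, w = rng.integers(2, 4, size=2)
    g[r0:r0 + h, c0:c0 + w] = 1
    return g

def control_grid(density: float) -> np.ndarray:
    """Control stimulus: i.i.d. pixels with the same expected density, no abstract structure."""
    return (rng.random((GRID, GRID)) < density).astype(int)

human_like = [structured_grid() for _ in range(100)]
density = float(np.mean([g.mean() for g in human_like]))
machine_like = [control_grid(density) for _ in human_like]

print(human_like[0], machine_like[0], sep="\n\n")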

Humans possess strong inductive biases which enable them to quickly learn to perform various tasks. This is in contrast to neural networks, which lack the same inductive biases and struggle to learn them empirically from observational data; as a result, they have difficulty generalizing to novel environments due to their lack of prior knowledge. 

Sreejan's results showed that when guided with representations from language and programs, the meta-learning agent not only improved performance on task distributions humans are adept at, but also decreased performance on control task distributions where humans perform poorly. This indicates that the abstraction supported by these representations, in the substrate of language or indeed a program, is key in the development of aligned artificial agents with human-like generalization, capabilities, values and behaviour.


References

Using natural language and program abstractions to instill human inductive biases in machines [Kumar et al/NEURIPS]

https://openreview.net/pdf?id=buXZ7nIqiwE


Core Knowledge [Elizabeth S. Spelke / Harvard]

https://www.harvardlds.org/wp-content/uploads/2017/01/SpelkeKinzler07-1.pdf


The Debate Over Understanding in AI's Large Language Models [Melanie Mitchell]

https://arxiv.org/abs/2210.13966


On the Measure of Intelligence [Francois Chollet]

https://arxiv.org/abs/1911.01547


ARC challenge [Chollet]

https://github.com/fchollet/ARC

Jan 28, 2023
24:58