Towards Data Science

By The TDS team

Note: The TDS podcast's current run has ended.

Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.
Available on Apple Podcasts, Google Podcasts, Overcast, Pocket Casts, PodBean, RadioPublic, and Spotify.

130. Edouard Harris - New Research: Advanced AI may tend to seek power *by default*

Progress in AI has been accelerating dramatically in recent years, and even months. It seems like every other day, a world-leading lab achieves some feat of AI that was previously believed to be impossible. And increasingly, these breakthroughs have been driven by the same, simple idea: AI scaling.

For those who haven’t been following the AI scaling saga, scaling means training AI systems with larger models, using increasingly absurd quantities of data and processing power. So far, empirical studies by the world’s top AI labs seem to suggest that scaling is an open-ended process that can lead to more and more capable and intelligent systems, with no clear limit.

And that’s led many people to speculate that scaling might usher in a new era of broadly human-level or even superhuman AI — the holy grail AI researchers have been after for decades.

And while that might sound cool, an AI that can solve general reasoning problems as well as or better than a human might actually be an intrinsically dangerous thing to build.

At least, that’s the conclusion that many AI safety researchers have come to following the publication of a new line of research that explores how modern AI systems tend to solve problems, and whether we should expect more advanced versions of them to perform dangerous behaviours like seeking power.

This line of research in AI safety is called “power-seeking”, and although it’s currently not well understood outside the frontier of AI safety and AI alignment research, it’s starting to draw a lot of attention. The first major theoretical study of power-seeking, for example, was led by Alex Turner, who’s appeared on the podcast before, and was published at NeurIPS, the world’s top AI conference.

And today, we’ll be hearing from Edouard Harris, an AI alignment researcher and one of my co-founders at the AI safety company Gladstone AI. Ed’s just completed a significant piece of AI safety research that extends Alex Turner’s original power-seeking work, and that shows what seems to be the first experimental evidence suggesting that we should expect highly advanced AI systems to seek power by default.

What does power-seeking really mean, though? And what does all this imply for the safety of future, general-purpose reasoning systems? That’s what this episode will be all about.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

*** 

Chapters:

- 0:00 Intro

- 4:00 Alex Turner's research

- 7:45 What technology wants

- 11:30 Universal goals

- 17:30 Connecting observations

- 24:00 Micro power seeking behaviour

- 28:15 Ed's research

- 38:00 The human as the environment

- 42:30 What leads to power seeking

- 48:00 Competition as a default outcome

- 52:45 General concern

- 57:30 Wrap-up

Oct 12, 2022 • 58:22
129. Amber Teng - Building apps with a new generation of language models

It’s no secret that a new generation of powerful and highly scaled language models is taking the world by storm. Companies like OpenAI, AI21Labs, and Cohere have built models so versatile that they’re powering hundreds of new applications, and unlocking entire new markets for AI-generated text.

In light of that, I thought it would be worth exploring the applied side of language modelling — to dive deep into one specific language model-powered tool, to understand what it means to build apps on top of scaled AI systems. How easily can these models be used in the wild? What bottlenecks and challenges do people run into when they try to build apps powered by large language models? That’s what I wanted to find out.

My guest today is Amber Teng, and she’s a data scientist who recently published a blog post that got quite a bit of attention, about a resume cover letter generator that she created using GPT-3, OpenAI’s powerful and now-famous language model. I thought her project would make for a great episode, because it exposes so many of the challenges and opportunities that come with the new era of powerful language models that we’ve just entered.

So today we’ll be exploring exactly that: looking at the applied side of language modelling and prompt engineering, understanding how large language models have made new apps not only possible but also much easier to build, and the likely future of AI-powered products.
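For a rough sense of what prompt engineering looks like in practice, here’s a minimal sketch of how a GPT-3-powered cover letter generator might build a prompt and call the completions API. The prompt wording, model name and parameters are illustrative assumptions rather than Amber’s actual implementation, and the exact client call depends on the version of the openai library you’re using (this is the 2022-era completion endpoint).

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def cover_letter_prompt(role: str, company: str, resume_summary: str) -> str:
    # Prompt engineering here mostly means writing clear instructions and
    # giving the model the context it needs, in plain text.
    return (
        "Write a concise, professional cover letter.\n"
        f"Role: {role}\n"
        f"Company: {company}\n"
        f"Candidate background: {resume_summary}\n\n"
        "Cover letter:\n"
    )

# Assumed 2022-era completion endpoint; newer library versions use a different interface.
response = openai.Completion.create(
    model="text-davinci-002",  # illustrative model choice
    prompt=cover_letter_prompt(
        "Data Scientist", "Acme Corp",
        "5 years of Python, ML model deployment, and A/B testing experience",
    ),
    max_tokens=400,
    temperature=0.7,
)
print(response["choices"][0]["text"])
```

Most of the iteration in a project like this tends to happen in the prompt string itself: adding constraints, examples, and formatting hints until the outputs are consistently usable.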

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

- 0:00 Intro

- 2:30 Amber’s background

- 5:30 Using GPT-3

- 14:45 Building prompts up

- 18:15 Prompting best practices

- 21:45 GPT-3 mistakes

- 25:30 Context windows

- 30:00 End-to-end time

- 34:45 The cost of one cover letter

- 37:00 The analytics

- 41:45 Dynamics around company-building

- 46:00 Commoditization of language modelling

- 51:00 Wrap-up

Oct 05, 2022 • 51:22
128. David Hirko - AI observability and data as a cybersecurity weakness

Imagine you’re a big hedge fund, and you want to go out and buy yourself some data. Data is really valuable for you — it’s literally going to shape your investment decisions and determine your outcomes.

But the moment you receive your data, a cold chill runs down your spine: how do you know your data supplier gave you the data they said they would? From your perspective, you’re staring down 100,000 rows in a spreadsheet, with no way to tell if half of them were made up — or maybe more for that matter.

This might seem like an obvious problem in hindsight, but it’s one most of us haven’t even thought of. We tend to assume that data is data, and that 100,000 rows in a spreadsheet is 100,000 legitimate samples.
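As a concrete illustration, here’s a minimal pandas sketch of the kind of sanity checks a data buyer might run before trusting a delivery. The toy DataFrame and the specific checks are purely illustrative assumptions, not any particular vendor’s product.

```python
import pandas as pd

# Toy stand-in for a delivered dataset.
df = pd.DataFrame({
    "email":  ["a@x.com", "b@x.com", "b@x.com", None],
    "name":   ["Ana", "Bob", "Bob", "Cho"],
    "region": ["US", "US", "US", "US"],
})

checks = {
    "row_count": len(df),                                # did we get as many rows as promised?
    "duplicate_rows": int(df.duplicated().sum()),        # copy-pasted filler is a red flag
    "null_fraction": float(df.isna().mean().mean()),     # overall share of missing values
    "constant_columns": int((df.nunique() <= 1).sum()),  # columns stuck on a single value
}
print(checks)
# e.g. {'row_count': 4, 'duplicate_rows': 1, 'null_fraction': 0.083..., 'constant_columns': 1}
```

Real data observability tools go much further, tracking schema changes, freshness and distribution drift over time, but even checks this simple catch a surprising amount of funny business.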

The challenge of making sure you’re dealing with high-quality data, or at least that you have the data you think you do, is called data observability, and it’s surprisingly difficult to solve for at scale. In fact, there are now entire companies that specialize in exactly that — one of which is Zectonal, whose co-founder Dave Hirko will be joining us for today’s episode of the podcast.

Dave has spent his career understanding how to evaluate and monitor data at massive scale. He did that first at AWS in the early days of cloud computing, and now through Zectonal, where he’s working on strategies that allow companies to detect issues with their data — whether they’re caused by intentional data poisoning, or unintentional data quality problems. Dave joined me to talk about data observability, data as a new vector for cyberattacks, and the future of enterprise data management on this episode of the TDS podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:
  • 0:00 Intro
  • 3:00 What is data observability?
  • 10:45 “Funny business” with data providers
  • 12:50 Data supply chains
  • 16:50 Various cybersecurity implications
  • 20:30 Deep data inspection
  • 27:20 Observed direction of change
  • 34:00 Steps the average person can take
  • 41:15 Challenges with GDPR transitions
  • 48:45 Wrap-up
Sep 28, 2022 • 49:03
127. Matthew Stewart - The emerging world of ML sensors

Today, we live in the era of AI scaling. It seems like everywhere you look, people are pushing to make large language models larger or more multi-modal, leveraging ungodly amounts of processing power to do it.

But although that’s one of the defining trends of the modern AI era, it’s not the only one. At the far opposite extreme from the world of hyperscale transformers and giant dense nets is the fast-evolving world of TinyML, where the goal is to pack AI systems onto small edge devices.

My guest today is Matthew Stewart, a deep learning and TinyML researcher at Harvard University, where he collaborates with the world’s leading IoT and TinyML experts on projects aimed at getting small devices to do big things with AI. Recently, along with his colleagues, Matt co-authored a paper that introduced a new way of thinking about sensing.

The idea is to tightly integrate machine learning and sensing on one device. For example, today we might have a sensor like a camera embedded on an edge device, and that camera would have to send data about all the pixels in its field of view back to a central server that might take that data and use it to perform a task like facial recognition. But that’s not great because it involves sending potentially sensitive data — in this case, images of people’s faces — from an edge device to a server, introducing security risks.

So instead, what if the camera’s output was processed on the edge device itself, so that all that had to be sent to the server was much less sensitive information, like whether or not a given face was detected? These systems — where edge devices harness onboard AI, and share only processed outputs with the rest of the world — are what Matt and his colleagues call ML sensors.
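To make that architecture concrete, here’s a hedged sketch of the process-locally, share-only-the-result loop. Every function below is a hypothetical placeholder standing in for the camera driver, the on-device model, and the uplink; the point is simply where the sensitive data stops.

```python
import random

def capture_frame() -> list:
    # Placeholder for the camera driver: raw pixels stay on the device.
    return [random.random() for _ in range(64 * 64)]

def run_face_detector(frame: list) -> bool:
    # Placeholder for a tiny on-board model (e.g. a quantized CNN).
    return sum(frame) / len(frame) > 0.5

def report(payload: dict) -> None:
    # Placeholder for the uplink: only a few bytes ever leave the device.
    print("sent upstream:", payload)

def sensor_tick() -> None:
    frame = capture_frame()                              # sensitive raw data, never transmitted
    report({"face_detected": run_face_detector(frame)})  # processed output only

sensor_tick()
```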

ML sensors really do seem like they’ll be part of the future, and they introduce a host of challenging ethical, privacy, and operational questions that I discussed with Matt on this episode of the TDS podcast.

*** 

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

- 3:20 Special challenges with TinyML

- 9:00 Most challenging aspects of Matt’s work

- 12:30 ML sensors

- 21:30 Customizing the technology

- 24:45 Data sheets and ML sensors

- 31:30 Customers with their own custom software

- 36:00 Access to the algorithm

- 40:30 Wrap-up

Sep 21, 2022 • 41:34
126. JR King - Does the brain run on deep learning?

Deep learning models — transformers in particular — are defining the cutting edge of AI today. They’re based on an architecture called an artificial neural network, as you probably already know if you’re a regular Towards Data Science reader. And if you are, then you might also already know that as their name suggests, artificial neural networks were inspired by the structure and function of biological neural networks, like those that handle information processing in our brains.

So it’s a natural question to ask: how far does that analogy go? Today, deep neural networks can master an increasingly wide range of skills that were historically unique to humans — skills like creating images, using language, planning, playing video games, and so on. Could that mean that these systems are processing information like the human brain, too?

To explore that question, we’ll be talking to JR King, a CNRS researcher at the Ecole Normale Supérieure, affiliated with Meta AI, where he leads the Brain & AI group. There, he works on identifying the computational basis of human intelligence, with a focus on language. JR is a remarkably insightful thinker, who’s spent a lot of time studying biological intelligence, where it comes from, and how it maps onto artificial intelligence. And he joined me to explore the fascinating intersection of biological and artificial information processing on this episode of the TDS podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc 

***

Chapters:
  • 2:30 What is JR’s day-to-day?
  • 5:00 AI and neuroscience
  • 12:15 Quality of signals within the research
  • 21:30 Universality of structures
  • 28:45 What makes up a brain?
  • 37:00 Scaling AI systems
  • 43:30 Growth of the human brain
  • 48:45 Observing certain overlaps
  • 55:30 Wrap-up
Sep 14, 2022 • 55:44
125. Ryan Fedasiuk - Can the U.S. and China collaborate on AI safety?

It’s no secret that the US and China are geopolitical rivals. And it’s also no secret that that rivalry extends into AI — an area both countries consider to be strategically critical.

But in a context where potentially transformative AI capabilities are being unlocked every few weeks, many of which lend themselves to military applications with hugely destabilizing potential, you might hope that the US and China would have robust agreements in place to deal with things like runaway conflict escalation triggered by an AI-powered weapon that misfires. Even at the height of the Cold War, the US and Russia had robust lines of communication to de-escalate potential nuclear conflicts, so surely the US and China have something at least as good in place now… right?

Well, they don’t, and to understand the reason why — and what we should do about it — I’ll be speaking to Ryan Fedasiuk, a Research Analyst at Georgetown University’s Center for Security and Emerging Technology and Adjunct Fellow at the Center for a New American Security. Ryan recently wrote a fascinating article for Foreign Policy Magazine, where he outlines the challenges and importance of US-China collaboration on AI safety. He joined me to talk about the U.S. and China’s shared interest in building safe AI, how each side views the other, and what realistic China AI policy looks like on this episode of the TDS podcast.

Sep 07, 2022 • 48:19
124. Alex Watson - Synthetic data could change everything

There’s a website called thispersondoesnotexist.com. When you visit it, you’re confronted by a high-resolution, photorealistic AI-generated picture of a human face. As the website’s name suggests, there’s no human being on the face of the earth who looks quite like the person staring back at you on the page.

Each of those generated pictures is a piece of data that captures so much of the essence of what it means to look like a human being. And yet they do so without telling you anything whatsoever about any particular person. In that sense, it’s fully anonymous human face data.

That’s impressive enough, and it speaks to how far generative image models have come over the last decade. But what if we could do the same for any kind of data?

What if I could generate an anonymized set of medical records or financial transaction data that captures all of the latent relationships buried  in a private dataset, without the risk of leaking sensitive information about real people? That’s the mission of Alex Watson, the Chief Product Officer and co-founder of Gretel AI, where he works on unlocking value hidden in sensitive datasets in ways that preserve privacy.

What I realized talking to Alex was that synthetic data is about much more than ensuring privacy. As you’ll see over the course of the conversation, we may well be heading for a world where most data can benefit from augmentation via data synthesis — where synthetic data brings privacy value almost as a side-effect of enriching ground truth data with context imported from the wider world.

Alex joined me to talk about data privacy, data synthesis, and what could be the very strange future of the data lifecycle on this episode of the TDS podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

  • 2:40 What is synthetic data?
  • 6:45 Large language models
  • 11:30 Preventing data leakage
  • 18:00 Generative versus downstream models
  • 24:10 De-biasing and fairness
  • 30:45 Using synthetic data
  • 35:00 People consuming the data
  • 41:00 Spotting correlations in the data
  • 47:45 Generalization of different ML algorithms
  • 51:15 Wrap-up
May 18, 2022 • 51:48
123. Ala Shaabana and Jacob Steeves - AI on the blockchain (it actually might just make sense)

My guests today are two ML researchers with world-class pedigrees who decided to build a company that puts AI on the blockchain. Now to most people — myself included — “AI on the blockchain” sounds like a winning entry in some kind of startup buzzword bingo. But what I discovered talking to Jacob and Ala was that they actually have good reasons to combine those two ingredients together.

At a high level, doing AI on a blockchain allows you to decentralize AI research and reward labs for building better models, and not for publishing papers in flashy journals with often biased reviewers.

And that’s not all — as we’ll see, Ala and Jacob are taking on some of the thorniest current problems in AI with their decentralized approach to machine learning. Everything from the problem of designing robust benchmarks to rewarding good AI research and even the centralization of power in the hands of a few large companies building powerful AI systems — these problems are all in their sights as they build out Bittensor, their AI-on-the-blockchain-startup.

Ala and Jacob joined me to talk about all those things and more on this episode of the TDS podcast.

---

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:

  • 2:40 Ala and Jacob’s backgrounds
  • 4:00 The basics of AI on the blockchain
  • 11:30 Generating human value
  • 17:00 Who sees the benefit?
  • 22:00 Use of GPUs
  • 28:00 Models learning from each other
  • 37:30 The size of the network
  • 45:30 The alignment of these systems
  • 51:00 Buying into a system
  • 54:00 Wrap-up
May 12, 2022 • 54:43
122. Sadie St. Lawrence - Trends in data science

As you might know if you follow the podcast, we usually talk about the world of cutting-edge AI capabilities, and some of the emerging safety risks and other challenges that the future of AI might bring. But I thought that for today’s episode, it would be fun to change things up a bit and talk about the applied side of data science, and how the field has evolved over the last year or two.

And I found the perfect guest to do that with: her name is Sadie St. Lawrence, and among other things, she’s the founder of Women in Data — a community that helps women enter the field of data and advance throughout their careers — and she’s also the host of the Data Bytes podcast, a seasoned data scientist and a community builder extraordinaire. Sadie joined me to talk about her founder’s journey, what data science looks like today, and even the possibilities that blockchains introduce for data science on this episode of the Towards Data Science podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

  • 2:00 Founding Women in Data
  • 6:30 Having gendered conversations
  • 11:00 The cultural aspect
  • 16:45 Opportunities in blockchain
  • 22:00 The blockchain database
  • 32:30 Data science education
  • 37:00 GPT-3 and unstructured data
  • 39:30 Data science as a career
  • 42:50 Wrap-up
May 04, 2022 • 43:03
121. Alexei Baevski - data2vec and the future of multimodal learning

If the name data2vec sounds familiar, that’s probably because it made quite a splash on social and even traditional media when it came out, about two months ago. It’s an important entry in what is now a growing list of strategies that are focused on creating individual machine learning architectures that handle many different data types, like text, image and speech.

Most self-supervised learning techniques involve getting a model to take some input data (say, an image or a piece of text) and mask out certain components of those inputs (say by blacking out pixels or words) in order to get the models to predict those masked out components.

That “filling in the blanks” task is hard enough to force AIs to learn facts about their data that generalize well, but it also means training models to perform tasks that are very different depending on the input data type. Filling in blacked out pixels is quite different from filling in blanks in a sentence, for example.
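Here’s a minimal PyTorch sketch of that generic fill-in-the-blanks setup: hide a fraction of the input positions and train a model to reconstruct what was there. It’s a toy version of the self-supervised objective described above, not data2vec itself (which predicts a teacher network’s representations rather than raw inputs), and the shapes and hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
batch, seq_len, dim = 8, 16, 32
inputs = torch.randn(batch, seq_len, dim)   # stand-in for patch / audio-frame / token embeddings
mask = torch.rand(batch, seq_len) < 0.15    # hide roughly 15% of positions
corrupted = inputs.clone()
corrupted[mask] = 0.0                       # replace masked positions with a placeholder value

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
reconstruction = encoder(corrupted)

# The loss only looks at the positions that were masked out.
loss = nn.functional.mse_loss(reconstruction[mask], inputs[mask])
loss.backward()
print(float(loss))
```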

So what if there was a way to come up with one task that we could use to train machine learning models on any kind of data? That’s where data2vec comes in.

For this episode of the podcast, I’m joined by Alexei Baevski, a researcher at Meta AI and one of the creators of data2vec. In addition to data2vec, Alexei has been involved in quite a bit of pioneering work on text and speech models, including wav2vec, Facebook’s widely publicized unsupervised speech model. Alexei joined me to talk about how data2vec works and what’s next for that research direction, as well as the future of multi-modal learning.

*** 

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

-  Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:
  • 2:00 Alexei’s background
  • 10:00 Software engineering knowledge
  • 14:10 Role of data2vec in progression
  • 30:00 Delta between student and teacher
  • 38:30 Losing interpreting ability
  • 41:45 Influence of greater abilities
  • 49:15 Wrap-up
Apr 27, 2022 • 49:31
120. Liam Fedus and Barrett Zoph - AI scaling with mixture of expert models

AI scaling has really taken off. Ever since GPT-3 came out, it’s become clear that one of the things we’ll need to do to move beyond narrow AI and towards more generally intelligent systems is going to be to massively scale up the size of our models, the amount of processing power they consume and the amount of data they’re trained on, all at the same time.

That’s led to a huge wave of highly scaled models that are incredibly expensive to train, largely because of their enormous compute budgets. But what if there was a more flexible way to scale AI — one that allowed us to decouple model size from compute budgets, so that we can track a more compute-efficient course to scale?

That’s the promise of so-called mixture of experts models, or MoEs. Unlike more traditional transformers, MoEs don’t update all of their parameters on every training pass. Instead, they route inputs intelligently to sub-models called experts, which can each specialize in different tasks. On a given training pass, only those experts have their parameters updated. The result is a sparse model, a more compute-efficient training process, and a new potential path to scale.
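Here’s a minimal PyTorch sketch of that routing idea, using top-1 (Switch-style) routing over a handful of small feed-forward experts. It’s a toy illustration of the mechanism described above, not Google’s implementation; production MoE layers add load-balancing losses, expert capacity limits, and distributed dispatch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Minimal mixture-of-experts layer: a router picks the top-1 expert for
    each token, so only that expert's parameters are used (and updated) for
    that token."""
    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # scores each token against each expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        gate = F.softmax(self.router(x), dim=-1)          # routing probabilities
        top_prob, top_idx = gate.max(dim=-1)              # top-1 routing decision
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            chosen = top_idx == i
            if chosen.any():
                # Only the selected expert runs (and receives gradients) for these tokens.
                out[chosen] = top_prob[chosen].unsqueeze(-1) * expert(x[chosen])
        return out

tokens = torch.randn(10, 64)
print(TinyMoELayer(64)(tokens).shape)  # torch.Size([10, 64])
```

Even at this scale the key property is visible: each token only touches one expert’s parameters, so the total parameter count can grow without a matching growth in per-token compute.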

Google has been pushing the frontier of research on MoEs, and my two guests today in particular have been involved in pioneering work on that strategy (among many others!). Liam Fedus and Barrett Zoph are research scientists at Google Brain, and they joined me to talk about AI scaling, sparsity and the present and future of MoE models on this episode of the TDS podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:
  • 2:15 Guests’ backgrounds
  • 8:00 Understanding specialization
  • 13:45 Speculations for the future
  • 21:45 Switch transformer versus dense net
  • 27:30 More interpretable models
  • 33:30 Assumptions and biology
  • 39:15 Wrap-up
Apr 20, 2022 • 40:48
119. Jaime Sevilla - Projecting AI progress from compute trends

Apr 13, 2022 • 48:34
118. Angela Fan - Generating Wikipedia articles with AI

Generating well-referenced and accurate Wikipedia articles has always been an important problem: Wikipedia has essentially become the Internet's encyclopedia of record, and hundreds of millions of people use it to understand the world.

But over the last decade Wikipedia has also become a critical source of training data for data-hungry text generation models. As a result, any shortcomings in Wikipedia’s content are at risk of being amplified by the text generation tools of the future. If one type of topic or person is chronically under-represented in Wikipedia’s corpus, we can expect generative text models to mirror — or even amplify — that under-representation in their outputs.

Through that lens, the project of Wikipedia article generation is about much more than it seems — it’s quite literally about setting the scene for the language generation systems of the future, and empowering humans to guide those systems in more robust ways.

That’s why I wanted to talk to Meta AI researcher Angela Fan, whose latest project is focused on generating reliable, accurate, and structured Wikipedia articles. She joined me to talk about her work, the implications of high-quality long-form text generation, and the future of human/AI collaboration on this episode of the TDS podcast.

--- 

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:
  • 1:45 Journey into Meta AI
  • 5:45 Transition to Wikipedia
  • 11:30 How articles are generated
  • 18:00 Quality of text
  • 21:30 Accuracy metrics
  • 25:30 Risk of hallucinated facts
  • 30:45 Keeping up with changes
  • 36:15 UI/UX problems
  • 45:00 Technical cause of gender imbalance
  • 51:00 Wrap-up
Apr 06, 2022 • 51:45
117. Beena Ammanath - Defining trustworthy AI

Trustworthy AI is one of today’s most popular buzzwords. But although everyone seems to agree that we want AI to be trustworthy, definitions of trustworthiness are often fuzzy or inadequate. Maybe that shouldn’t be surprising: it’s hard to come up with a single set of standards that add up to “trustworthiness”, and that apply just as well to a Netflix movie recommendation as to a self-driving car.

So maybe trustworthy AI needs to be thought of in a more nuanced way — one that reflects the intricacies of individual AI use cases. If that’s true, then new questions come up: who gets to define trustworthiness, and who bears responsibility when a lack of trustworthiness leads to harms like AI accidents, or undesired biases?

Through that lens, trustworthiness becomes a problem not just for algorithms, but for organizations. And that’s exactly the case that Beena Ammanath makes in her upcoming book, Trustworthy AI, which explores AI trustworthiness from a practical perspective, looking at what concrete steps companies can take to make their in-house AI work safer, better and more reliable. Beena joined me to talk about defining trustworthiness, explainability and robustness in AI, as well as the future of AI regulation and self-regulation on this episode of the TDS podcast.

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

Chapters:
  • 1:55 Background and trustworthy AI
  • 7:30 Incentives to work on capabilities
  • 13:40 Regulation at the level of application domain
  • 16:45 Bridging the gap
  • 23:30 Level of cognition offloaded to the AI
  • 25:45 What is trustworthy AI?
  • 34:00 Examples of robustness failures
  • 36:45 Team diversity
  • 40:15 Smaller companies
  • 43:00 Application of best practices
  • 46:30 Wrap-up
Mar 30, 2022 • 46:46
116. Katya Sedova - AI-powered disinformation, present and future

Until recently, very few people were paying attention to the potential malicious applications of AI. And that made some sense: in an era where AIs were narrow and had to be purpose-built for every application, you’d need an entire research team to develop AI tools for malicious applications. Since it’s more profitable (and safer) for that kind of talent to work in the legal economy, AI didn’t offer much low-hanging fruit for malicious actors.

But today, that’s all changing. As AI becomes more flexible and general, the link between the purpose for which an AI was built and its potential downstream applications has all but disappeared. Large language models can be trained to perform valuable tasks, like supporting writers, translating between languages, or writing better code. But a system that can write an essay can also write a fake news article, or power an army of humanlike text-generating bots.

More than any other moment in the history of AI, the move to scaled, general-purpose foundation models has shown how AI can be a double-edged sword. And now that these models exist, we have to come to terms with them, and figure out how to build societies that remain stable in the face of compelling AI-generated content, and increasingly accessible AI-powered tools with malicious use potential.

That’s why I wanted to speak with Katya Sedova, a former Congressional Fellow and Microsoft alumna who now works at Georgetown University’s Center for Security and Emerging Technology, where she recently co-authored some fascinating work exploring current and likely future malicious uses of AI. If you like this conversation I’d really recommend checking out her team’s latest report — it’s called “AI and the future of disinformation campaigns”.

Katya joined me to talk about malicious AI-powered chatbots, fake news generation and the future of AI-augmented influence campaigns on this episode of the TDS podcast.

***

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

*** 

Chapters:
  • 2:40 Malicious uses of AI
  • 4:30 Last 10 years in the field
  • 7:50 Low-hanging fruit of automation
  • 14:30 Other analytics functions
  • 25:30 Authentic bots
  • 30:00 Influences of service businesses
  • 36:00 Race to the bottom
  • 42:30 Automation of systems
  • 50:00 Manufacturing norms
  • 52:30 Interdisciplinary conversations
  • 54:00 Wrap-up
Mar 23, 2022 • 54:24
115. Irina Rish - Out-of-distribution generalization

Imagine an AI that’s trained to identify cows in images. Ideally, we’d want it to learn to detect cows based on their shape and colour. But what if the cow pictures we put in the training dataset always show cows standing on grass?

In that case, we have a spurious correlation between grass and cows, and if we’re not careful, our AI might learn to become a grass detector rather than a cow detector. Even worse, we might only realize that’s happened after we’ve deployed it in the real world, when it encounters a cow that isn’t standing on grass for the first time.

So how do you build AI systems that can learn robust, general concepts that remain valid outside the context of their training data?

That’s the problem of out-of-distribution generalization, and it’s a central part of the research agenda of Irina Rish, a core member of Mila, the Quebec AI Research Institute, and the Canada Excellence Research Chair in Autonomous AI. Irina’s research explores many different strategies that aim to overcome the out-of-distribution problem, from empirical AI scaling efforts to more theoretical work, and she joined me to talk about just that on this episode of the podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:
  • 2:00 Research, safety, and generalization
  • 8:20 Invariant risk minimization
  • 15:00 Importance of scaling
  • 21:35 Role of language
  • 27:40 AGI and scaling
  • 32:30 GPT versus ResNet 50
  • 37:00 Potential revolutions in architecture
  • 42:30 Inductive bias aspect
  • 46:00 New risks
  • 49:30 Wrap-up
Mar 09, 2022 • 50:12
114. Sam Bowman - Are we *under-hyping* AI?

Google the phrase “AI over-hyped”, and you’ll find literally dozens of articles from the likes of Forbes, Wired, and Scientific American, all arguing that “AI isn’t really as impressive as it seems from the outside,” and “we still have a long way to go before we come up with *true* AI, don’t you know.”

Amusingly, despite the universality of the “AI is over-hyped” narrative, the statement that “We haven’t made as much progress in AI as you might think™️” is often framed as somehow being an edgy, contrarian thing to believe.

All that pressure not to over-hype AI research really gets to people — researchers included. And they adjust their behaviour accordingly: they over-hedge their claims, cite outdated and since-resolved failure modes of AI systems, and generally avoid drawing straight lines between points that clearly show AI progress exploding across the board. All, presumably, to avoid being perceived as AI over-hypers.

Why does this matter? Well for one, under-hyping AI allows us to stay asleep — to delay answering many of the fundamental societal questions that come up when widespread automation of labour is on the table. But perhaps more importantly, it reduces the perceived urgency of addressing critical problems in AI safety and AI alignment.

Yes, we need to be careful that we’re not over-hyping AI. “AI startups” that don’t use AI are a problem. Predictions that artificial general intelligence is almost certainly a year away are a problem. Confidently prophesying major breakthroughs over short timescales absolutely does harm the credibility of the field.

But at the same time, we can’t let ourselves be so cautious that we’re not accurately communicating the true extent of AI’s progress and potential. So what’s the right balance?

That’s where Sam Bowman comes in. Sam is a professor at NYU, where he does research on AI and language modeling. But most important for today’s purposes, he’s the author of a paper titled, “When combating AI hype, proceed with caution,” in which he explores a trend he calls under-claiming — a common practice among researchers that consists of under-stating the extent of current AI capabilities, and over-emphasizing failure modes in ways that can be (unintentionally) deceptive.

Sam joined me to talk about under-claiming and what it means for AI progress on this episode of the Towards Data Science podcast.

***

Intro music

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc 

***

Chapters: 
  • 2:15 Overview of the paper
  • 8:50 Disappointing systems
  • 13:05 Potential double standard
  • 19:00 Moving away from multi-modality
  • 23:50 Overall implications
  • 28:15 Pressure to publish or perish
  • 32:00 Announcement discrepancies
  • 36:15 Policy angle
  • 41:00 Recommendations
  • 47:20 Wrap-up
Mar 02, 2022 • 47:49
113. Yaron Singer - Catching edge cases in AI

It’s no secret that AI systems are being used in more and more high-stakes applications. As AI eats the world, it’s becoming critical to ensure that AI systems behave robustly — that they don’t get thrown off by unusual inputs, and start spitting out harmful predictions or recommending dangerous courses of action. If we’re going to have AI drive us to work, or decide who gets bank loans and who doesn’t, we’d better be confident that our AI systems aren’t going to fail because of a freak blizzard, or because some intern missed a minus sign.

We’re now past the point where companies can afford to treat AI development like a glorified Kaggle competition, in which the only thing that matters is how well models perform on a testing set. AI-powered screw-ups aren’t always life-or-death issues, but they can harm real users, and cause brand damage to companies that don’t anticipate them.

Fortunately, AI risk is starting to get more attention these days, and new companies — like Robust Intelligence — are stepping up to develop strategies that anticipate AI failures, and mitigate their effects. Joining me for this episode of the podcast was Yaron Singer, a former Googler, professor of computer science and applied math at Harvard, and now CEO and co-founder of Robust Intelligence. Yaron has the rare combination of theoretical and engineering expertise required to understand what AI risk is, and the product intuition to know how to integrate that understanding into solutions that can help developers and companies deal with AI risk.

--- 

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

--- 

Chapters:
  • 0:00 Intro
  • 2:30 Journey into AI risk
  • 5:20 Guarantees of AI systems
  • 11:00 Testing as a solution
  • 15:20 Generality and software versus custom work
  • 18:55 Consistency across model types
  • 24:40 Different model failures
  • 30:25 Levels of responsibility
  • 35:00 Wrap-up
Feb 09, 2022 • 35:20
112. Tali Raveh - AI, single cell genomics, and the new era of computational biology

Until very recently, the study of human disease involved looking at big things — like organs or macroscopic systems — and figuring out when and how they can stop working properly. But that’s all started to change: in recent decades, new techniques have allowed us to look at disease in a much more detailed way, by examining the behaviour and characteristics of single cells.

One class of those techniques is now known as single-cell genomics — the study of gene expression and function at the level of single cells. Single-cell genomics is creating new, high-dimensional datasets consisting of tens of millions of cells whose gene expression profiles and other characteristics have been painstakingly measured. And these datasets are opening up exciting new opportunities for AI-powered drug discovery — opportunities that startups are now starting to tackle head-on.

Joining me for today’s episode is Tali Raveh, Senior Director of Computational Biology at Immunai, a startup that’s using single-cell level data to perform high resolution profiling of the immune system at industrial scale. Tali joined me to talk about what makes the immune system such an exciting frontier for modern medicine, and how single-cell data and AI might be poised to generate unprecedented breakthroughs in disease treatment on this episode of the TDS podcast.

---

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

--- 

Chapters:

0:00 Intro

2:00 Tali’s background

4:00 Immune systems and modern medicine

14:40 Data collection technology

19:00 Exposing cells to different drugs

24:00 Labeled and unlabelled data

27:30 Dataset status

31:30 Recent algorithmic advances

36:00 Cancer and immunology

40:00 The next few years

41:30 Wrap-up

Feb 02, 2022 • 42:04
111. Mo Gawdat - Scary Smart: A former Google exec’s perspective on AI risk

If you were scrolling through your newsfeed in late September 2021, you may have caught this splashy headline from The Times of London that read, “Can this man save the world from artificial intelligence?”. The man in question was Mo Gawdat, an entrepreneur and senior tech executive who spent several years as the Chief Business Officer at GoogleX (now called X Development), Google’s semi-secret research facility that experiments with moonshot projects like self-driving cars, flying vehicles, and geothermal energy. At X, Mo was exposed to the absolute cutting edge of many fields — one of which was AI. His experience seeing AI systems learn and interact with the world raised red flags for him — hints of the potentially disastrous failure modes of the AI systems we might just end up with if we don’t get our act together now.

Mo writes about his experience as an insider at one of the world’s most secretive research labs and how it led him to worry about AI risk, but also about AI’s promise and potential in his new book, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. He joined me to talk about just that on this episode of the TDS podcast.

Jan 26, 2022 • 01:00:12
110. Alex Turner - Will powerful AIs tend to seek power?

Today’s episode is somewhat special, because we’re going to be talking about what might be the first solid quantitative study of the power-seeking tendencies that we can expect advanced AI systems to have in the future.

For a long time, there’s kind of been this debate in the AI safety world, between:

  • People who worry that powerful AIs could eventually displace, or even eliminate humanity altogether as they find more clever, creative and dangerous ways to optimize their reward metrics on the one hand, and
  • People who say that’s Terminator-baiting Hollywood nonsense that anthropomorphizes machines in a way that’s unhelpful and misleading.

Unfortunately, recent work in AI alignment — and in particular, a spotlighted 2021 NeurIPS paper — suggests that the AI takeover argument might be stronger than many had realized. In fact, it’s starting to look like we ought to expect to see power-seeking behaviours from highly capable AI systems by default. These behaviours include things like AI systems preventing us from shutting them down, repurposing resources in pathological ways to serve their objectives, and even in the limit, generating catastrophes that would put humanity at risk.

As concerning as these possibilities might be, it’s exciting that we’re starting to develop a more robust and quantitative language to describe AI failures and power-seeking. That’s why I was so excited to sit down with AI researcher Alex Turner, the author of the spotlighted NeurIPS paper on power-seeking, and discuss his path into AI safety, his research agenda and his perspective on the future of AI on this episode of the TDS podcast.

***

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters: 

- 2:05 Interest in alignment research

- 8:00 Two camps of alignment research

- 13:10 The NeurIPS paper

- 17:10 Optimal policies

- 25:00 Two-piece argument

- 28:30 Relaxing certain assumptions

- 32:45 Objections to the paper

- 39:00 Broader sense of optimization

- 46:35 Wrap-up

Jan 19, 2022 • 46:57
109. Danijar Hafner - Gaming our way to AGI

Until recently, AI systems have been narrow — they’ve only been able to perform the specific tasks that they were explicitly trained for. And while narrow systems are clearly useful, the holy grail of AI is to build more flexible, general systems.

But that can’t be done without good performance metrics that we can optimize for — or that we can at least use to measure generalization ability. Somehow, we need to figure out what number needs to go up in order to bring us closer to generally-capable agents. That’s the question we’ll be exploring on this episode of the podcast, with Danijar Hafner. Danijar is a PhD student in artificial intelligence at the University of Toronto, working with Jimmy Ba and Geoffrey Hinton, and a researcher at Google Brain and the Vector Institute.

Danijar has been studying the problem of performance measurement and benchmarking for RL agents with generalization abilities. As part of that work, he recently released Crafter, a tool that can procedurally generate complex environments that are a lot like Minecraft, featuring resources that need to be collected, tools that can be developed, and enemies who need to be avoided or defeated. In order to succeed in a Crafter environment, agents need to robustly plan, explore and test different strategies, which allow them to unlock certain in-game achievements.

Crafter is part of a growing set of strategies that researchers are exploring to figure out how we can benchmark and measure the performance of general-purpose AIs, and it also tells us something interesting about the state of AI: increasingly, our ability to define tasks that require the right kind of generalization abilities is becoming just as important as innovating on AI model architectures. Danijar joined me to talk about Crafter, reinforcement learning, and the big challenges facing AI researchers as they work towards general intelligence on this episode of the TDS podcast.

***

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:
  • 0:00 Intro
  • 2:25 Measuring generalization
  • 5:40 What is Crafter?
  • 11:10 Differences between Crafter and Minecraft
  • 20:10 Agent behavior
  • 25:30 Merging scaled models and reinforcement learning
  • 29:30 Data efficiency
  • 38:00 Hierarchical learning
  • 43:20 Human-level systems
  • 48:40 Cultural overlap
  • 49:50 Wrap-up
Jan 12, 2022 • 50:06
108. Last Week In AI — 2021: The (full) year in review
Jan 05, 2022 • 50:22
107. Kevin Hu - Data observability and why it matters

Imagine for a minute that you’re running a profitable business, and that part of your sales strategy is to send the occasional mass email to people who’ve signed up to be on your mailing list. For a while, this approach leads to a reliable flow of new sales, but then one day, that abruptly stops. What happened?

You pore over logs, looking for an explanation, but it turns out that the problem wasn’t with your software; it was with your data. Maybe the new intern accidentally added a character to every email address in your dataset, or shuffled the names on your mailing list so that Christina got a message addressed to “John”, or vice-versa. Versions of this story happen surprisingly often, and when they happen, the cost can be significant: lost revenue, disappointed customers, or worse — an irreversible loss of trust.

Today, entire products are being built on top of datasets that aren’t monitored properly for critical failures — and an increasing number of those products are operating in high-stakes situations. That’s why data observability is so important: the ability to  track the origin, transformations and characteristics of mission-critical data to detect problems before they lead to downstream harm.

And it’s also why we’ll be talking to Kevin Hu, the co-founder and CEO of Metaplane, one of the world’s first data observability startups. Kevin has a deep understanding of data pipelines, and the problems that can pop up if they aren’t properly monitored. He joined me to talk about data observability, why it matters, and how it might be connected to responsible AI on this episode of the TDS podcast.

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

Chapters

  • 0:00 Intro
  • 2:00 What is data observability?
  • 8:20 Difference between a dataset’s internal and external characteristics
  • 12:20 Why is data so difficult to log?
  • 17:15 Tracing back models
  • 22:00 Algorithmic analysis of data
  • 26:30 Data ops in five years
  • 33:20 Relation to cutting-edge AI work
  • 39:25 Software engineering and startup funding
  • 42:05 Problems on a smaller scale
  • 46:40 Future data ops problems to solve
  • 48:45 Wrap-up
Dec 15, 2021 • 49:56
106. Yang Gao - Sample-efficient AI

Historically, AI systems have been slow learners. For example, a computer vision model often needs to see tens of thousands of hand-written digits before it can tell a 1 apart from a 3. Even game-playing AIs like DeepMind’s AlphaGo, or its more recent descendant MuZero, need far more experience than humans do to master a given game.

So when someone develops an algorithm that can reach human-level performance at anything as fast as a human can, it’s a big deal. And that’s exactly why I asked Yang Gao to join me on this episode of the podcast. Yang is an AI researcher with affiliations at Berkeley and Tsinghua University, who recently co-authored a paper introducing EfficientZero: a reinforcement learning system that learned to play Atari games at the human-level after just two hours of in-game experience. It’s a tremendous breakthrough in sample-efficiency, and a major milestone in the development of more general and flexible AI systems.

--- 

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters: 

- 0:00 Intro

- 1:50 Yang’s background

- 6:00 MuZero’s activity

- 13:25 MuZero to EfficientZero

- 19:00 Sample efficiency comparison

- 23:40 Leveraging algorithmic tweaks

- 27:10 Importance of evolution to human brains and AI systems

- 35:10 Human-level sample efficiency

- 38:28 Existential risk from AI in China

- 47:30 Evolution and language

- 49:40 Wrap-up

Dec 08, 2021 • 49:53
105. Yannic Kilcher - A 10,000-foot view of AI
Dec 01, 2021 • 01:03:05
104. Ken Stanley - AI without objectives

Today, most machine learning algorithms use the same paradigm: set an objective, and train an agent, a neural net, or a classical model to perform well against that objective. That approach has given good results: these types of AI can hear, speak, write, read, draw, drive and more.

But they’re also inherently limited: because they optimize for objectives that seem interesting to humans, they often avoid regions of parameter space that are valuable, but that don’t immediately seem interesting to human beings, or the objective functions we set. That poses a challenge for researchers like Ken Stanley, whose goal is to build broadly superintelligent AIs — intelligent systems that outperform humans at a wide range of tasks. Among other things, Ken is a former startup founder and AI researcher, whose career has included work in academia, at UberAI labs, and most recently at OpenAI, where he leads the open-ended learning team.

Ken joined me to talk about his 2015 book Why Greatness Cannot Be Planned: The Myth of the Objective, what open-endedness could mean for humanity, the future of intelligence, and even AI safety on this episode of the TDS podcast.

Nov 24, 2021 • 01:06:27
103. Gillian Hadfield - How to create explainable AI regulations that actually make sense

It’s no secret that governments around the world are struggling to come up with effective policies to address the risks and opportunities that AI presents. And there are many reasons why that’s happening: many people — including technical people — think they understand what frontier AI looks like, but very few actually do, and even fewer are interested in applying their understanding in a government context, where salaries are low and stock compensation doesn’t even exist.

So there’s a critical policy-technical gap that needs bridging, and failing to address that gap isn’t really an option: it would mean flying blind through the most important test of technological governance the world has ever faced. Unfortunately, policymakers have had to move ahead with regulating and legislating with that dangerous knowledge gap in place, and the result has been less-than-stellar: widely criticized definitions of privacy and explainability, and definitions of AI that create exploitable loopholes are among some of the more concerning results.

Enter Gillian Hadfield, a Professor of Law and Professor of Strategic Management and Director of the Schwartz Reisman Institute for Technology and Society. Gillian’s background is in law and economics, which has led her to AI policy, and definitional problems with recent and emerging regulations on AI and privacy. But — as I discovered during the podcast — she also happens to be related to Dylan Hadfield-Menell, an AI alignment researcher whom we’ve had on the show before. Partly through Dylan, Gillian has also been exploring how principles of AI alignment research can be applied to AI policy, and to contract law. Gillian joined me to talk about all that and more on this episode of the podcast.

---

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc 

---

Chapters:
  • 1:35 Gillian’s background
  • 8:44 Layers and governments’ legislation
  • 13:45 Explanations and justifications
  • 17:30 Explainable humans
  • 24:40 Goodhart’s Law
  • 29:10 Bringing in AI alignment
  • 38:00 GDPR
  • 42:00 Involving technical folks
  • 49:20 Wrap-up
Nov 17, 2021 • 51:07
102. Wendy Foster - AI ethics as a user experience challenge

AI ethics is often treated as a dry, abstract academic subject. It doesn’t have the kinds of consistent, unifying principles that you might expect from a quantitative discipline like computer science or physics.

But somehow, the ethics rubber has to meet the AI road, and where that happens — where real developers have to deal with real users and apply concrete ethical principles — is where you find some of the most interesting, practical thinking on the topic.

That’s why I wanted to speak with Wendy Foster, the Director of Engineering and Data Science at Shopify. Wendy’s approach to AI ethics is refreshingly concrete and actionable. And unlike more abstract approaches, it’s based on clear principles like user empowerment: the idea that you should avoid forcing users to make particular decisions, and instead design user interfaces that frame AI-recommended actions as suggestions that can be ignored or acted on.

Wendy joined me to discuss her practical perspective on AI ethics, the importance of user experience design for AI products, and how responsible AI gets baked into product at Shopify on this episode of the TDS podcast.

---

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters

- 0:00 Intro

- 1:40 Wendy’s background

- 4:40 What does practice mean?

- 14:00 Different levels of explanation

- 19:05 Trusting the system

- 24:00 Training new folks

- 30:02 Company culture

- 34:10 The core of AI ethics

- 40:10 Communicating with the user

- 44:15 Wrap-up

Nov 10, 2021 • 44:37
101. Ayanna Howard - AI and the trust problem

Over the last two years, the capabilities of AI systems have exploded. AlphaFold2, MuZero, CLIP, DALLE, GPT-3 and many other models have extended the reach of AI to new problem classes. There’s a lot to be excited about.

But as we’ve seen in other episodes of the podcast, there’s a lot more to getting value from an AI system than jacking up its capabilities. And increasingly, one of these additional missing factors is becoming trust. You can make all the powerful AIs you want, but if no one trusts their output — or if people trust it when they shouldn’t — you can end up doing more harm than good.

That’s why we invited Ayanna Howard on the podcast. Ayanna is a roboticist, entrepreneur and Dean of the College of Engineering at Ohio State University, where she focuses her research on human-machine interactions and the factors that go into building human trust in AI systems. She joined me to talk about her research, its applications in medicine and education, and the future of human-machine trust.

---

Intro music:

- Artist: Ron Gelinas

- Track Title: Daybreak Chill Blend (original mix)

- Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:

- 0:00 Intro

- 1:30 Ayanna’s background

- 6:10 The interpretability of neural networks

- 12:40 Domain of machine-human interaction

- 17:00 The issue of preference

- 20:50 Gell-Mann/newspaper amnesia

- 26:35 Assessing a person’s persuadability

- 31:40 Doctors and new technology

- 36:00 Responsibility and accountability

- 43:15 The social pressure aspect

- 47:15 Is Ayanna optimistic?

- 53:00 Wrap-up

Nov 03, 2021 • 53:16
100. Max Jaderberg - Open-ended learning at DeepMind

On the face of it, there’s no obvious limit to the reinforcement learning paradigm: you put an agent in an environment and reward it for taking good actions until it masters a task.
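
To make that loop concrete, here’s a minimal, self-contained sketch in Python. The toy environment and tabular agent below are invented purely for illustration; they aren’t from DeepMind’s paper or any particular RL library.

```python
# A toy version of the reinforcement learning loop described above: an agent acts
# in an environment, receives rewards, and updates its behaviour accordingly.
# The environment and agent are invented stand-ins, not code from any real benchmark.
import random

class ToyEnvironment:
    """A one-dimensional world: the agent starts at 0 and should learn to move right."""

    def reset(self):
        self.position = 0
        return self.position

    def step(self, action):  # action is -1 (left) or +1 (right)
        self.position += action
        reward = 0.1 if action == 1 else 0.0  # small reward for each step to the right
        if self.position >= 5:
            reward += 1.0                     # bonus for reaching the goal
        done = abs(self.position) >= 5        # episode ends at either boundary
        return self.position, reward, done

class ToyAgent:
    """Keeps a running estimate of each action's value and usually picks the best one."""

    def __init__(self):
        self.value = {-1: 0.0, +1: 0.0}

    def act(self, state):
        if random.random() < 0.1:                   # explore occasionally
            return random.choice([-1, +1])
        return max(self.value, key=self.value.get)  # otherwise exploit current estimates

    def learn(self, action, reward):
        self.value[action] += 0.1 * (reward - self.value[action])

env, agent = ToyEnvironment(), ToyAgent()
for _ in range(200):                # the training loop: act, observe reward, update
    state, done = env.reset(), False
    while not done:
        action = agent.act(state)
        state, reward, done = env.step(action)
        agent.learn(action, reward)

print(agent.value)  # the +1 action should end up with the clearly higher value estimate
```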

And by last year, RL had achieved some amazing things, including mastering Go, various Atari games, Starcraft II and so on. But the holy grail of AI isn’t to master specific games, but rather to generalize — to make agents that can perform well on new games that they haven’t been trained on before.

Fast forward to July of this year, though, and a team at DeepMind published a paper called “Open-Ended Learning Leads to Generally Capable Agents”, which takes a big step in the direction of general RL agents. Joining me for this episode of the podcast is one of the co-authors of that paper, Max Jaderberg. Max came into the Google ecosystem in 2014 when Google acquired his computer vision company, and more recently, he started DeepMind’s open-ended learning team, which is focused on pushing machine learning further into the territory of cross-task generalization. I spoke to Max about open-ended learning, the path ahead for generalization and the future of AI.

---

Intro music by:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc 

---

Chapters: 

- 0:00 Intro

- 1:30 Max’s background

- 6:40 Differences in procedural generations

- 12:20 The qualitative side

- 17:40 Agents’ mistakes

- 20:00 Measuring generalization

- 27:10 Environments and loss functions

- 32:50 The potential of symbolic logic

- 36:45 Two distinct learning processes

- 42:35 Forecasting research

- 45:00 Wrap-up

Oct 27, 2021 · 45:26
99. Margaret Mitchell - (Practical) AI ethics

Oct 20, 2021 · 45:44
98. Mike Tung - Are knowledge graphs AI’s next big thing?

As impressive as they are, language models like GPT-3 and BERT all have the same problem: they’re trained on reams of internet data to imitate human writing. And human writing is often wrong, biased, or both, which means language models are trying to emulate an imperfect target.

Language models often babble, or make up answers to questions they don’t understand, which can make them unreliable sources of truth. That’s why there’s been increasing interest in alternative ways to retrieve information from large datasets — approaches that include knowledge graphs.

Knowledge graphs encode entities like people, places and objects as nodes, which are then connected to other entities via edges that specify the nature of the relationship between the two. For example, a knowledge graph might contain a node for Mark Zuckerberg, linked to another node for Facebook via an edge that indicates that Zuck is Facebook’s CEO. Both of these nodes might in turn be connected to dozens, or even thousands, of others, depending on the scale of the graph.
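
To make that structure concrete, here’s a tiny Python sketch that stores the example above as (subject, relation, object) triples. It’s an illustration of the data structure only, and has nothing to do with Diffbot’s actual schema or API.

```python
# A toy knowledge graph stored as (subject, relation, object) triples.
# It mirrors the Zuckerberg/Facebook example above and is purely illustrative;
# it does not reflect Diffbot's actual data model.
triples = [
    ("Mark Zuckerberg", "is_ceo_of", "Facebook"),
    ("Mark Zuckerberg", "founded", "Facebook"),
    ("Facebook", "is_headquartered_in", "Menlo Park"),
]

def query(graph, subject, relation):
    """Answer one-hop questions, e.g. 'what is Mark Zuckerberg CEO of?'."""
    return [obj for (subj, rel, obj) in graph if subj == subject and rel == relation]

def neighbors(graph, entity):
    """Return every edge that touches a given entity, in either direction."""
    return [t for t in graph if entity in (t[0], t[2])]

print(query(triples, "Mark Zuckerberg", "is_ceo_of"))  # ['Facebook']
print(neighbors(triples, "Facebook"))                  # all three edges touch Facebook
```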

Knowledge graphs are an exciting path ahead for AI capabilities, and the world’s largest knowledge graphs are built by a company called Diffbot, whose CEO Mike Tung joined me for this episode of the podcast to discuss where knowledge graphs can improve on more standard techniques, and why they might be a big part of the future of AI.

---

Intro music by:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

--- 

Chapters:

0:00 Intro

1:30 The Diffbot dynamic

3:40 Knowledge graphs

7:50 Crawling the internet

17:15 What makes this time special?

24:40 Relation to neural networks

29:30 Failure modes

33:40 Sense of competition

39:00 Knowledge graphs for discovery

45:00 Consensus to find truth

48:15 Wrap-up

Oct 13, 2021 · 48:57
97. Anthony Habayeb - The present and future of AI regulation

Corporate governance of AI doesn’t sound like a sexy topic, but it’s rapidly becoming one of the most important challenges for big companies that rely on machine learning models to deliver value for their customers. More and more, they’re expected to develop and implement governance strategies to reduce the incidence of bias, and increase the transparency of their AI systems and development processes. Those expectations have historically come from consumers, but governments are starting to impose hard requirements, too.

So for today’s episode, I spoke to Anthony Habayeb, founder and CEO of Monitaur, a startup focused on helping businesses anticipate and comply with new and upcoming AI regulations and governance requirements. Anthony’s been watching the world of AI regulation very closely over the last several years, and was kind enough to share his insights on the current state of play and future direction of the field.

--- 

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

--- 

Chapters:

- 0:00 Intro

- 1:45 Anthony’s background

- 6:20 Philosophies surrounding regulation

- 14:50 The role of governments

- 17:30 Understanding fairness

- 25:35 AI’s PR problem

- 35:20 Governments’ regulation

- 42:25 Useful techniques for data science teams

- 46:10 Future of AI governance

- 49:20 Wrap-up

Oct 06, 2021 · 49:31
96. Jan Leike - AI alignment at OpenAI

The more powerful our AIs become, the more we’ll have to ensure that they’re doing exactly what we want. If we don’t, we risk building AIs that pursue dangerously creative solutions with undesirable, or even outright dangerous, side-effects. Even a slight misalignment between the motives of a sufficiently advanced AI and human values could be hazardous.

That’s why leading AI labs like OpenAI are already investing significant resources into AI alignment research. Understanding that research is important if you want to understand where advanced AI systems might be headed, and what challenges we might encounter as AI capabilities continue to grow — and that’s what this episode of the podcast is all about. My guest today is Jan Leike, head of AI alignment at OpenAI, and an alumnus of DeepMind and the Future of Humanity Institute. As someone who works directly with some of the world’s largest AI systems (including OpenAI’s GPT-3), Jan has a unique and interesting perspective to offer both on the current challenges facing alignment researchers, and the most promising future directions the field might take.

--- 

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

--- 

Chapters:  

0:00 Intro

1:35 Jan’s background

7:10 Timing of scalable solutions

16:30 Recursive reward modeling

24:30 Amplification of misalignment

31:00 Community focus

32:55 Wireheading

41:30 Arguments against the democratization of AIs

49:30 Differences between capabilities and alignment

51:15 Research to focus on

1:01:45 Formalizing an understanding of personal experience

1:04:04 OpenAI hiring

1:05:02 Wrap-up

Sep 29, 2021 · 01:05:18
95. Francesca Rossi - Thinking, fast and slow: AI edition

The recent success of large transformer models in AI raises new questions about the limits of current strategies: can we expect deep learning, reinforcement learning and other prosaic AI techniques to get us all the way to humanlike systems with general reasoning abilities?

Some think so, and others disagree. One dissenting voice belongs to Francesca Rossi, a former professor of computer science, and now AI Ethics Global Leader at IBM. Much of Francesca’s research is focused on deriving insights from human cognition that might help AI systems generalize better. Francesca joined me for this episode of the podcast to discuss her research, her thinking, and her thinking about thinking.

Sep 22, 2021 · 46:43
94. Divya Siddarth - Are we thinking about AI wrong?
Jul 28, 2021 · 01:02:44
93. 2021: A year in AI (so far) - Reviewing the biggest AI stories of 2021 with our friends at the Let’s Talk AI podcast

2020 was an incredible year for AI. We saw powerful hints of the potential of large language models for the first time thanks to OpenAI’s GPT-3, DeepMind used AI to solve one of the greatest open problems in molecular biology, and Boston Dynamics demonstrated their ability to blend AI and robotics in dramatic fashion.

Progress in AI is accelerating rapidly, and though we’re just over halfway through 2021, this year is already turning into another one for the books. So we decided to partner with our friends over at Let’s Talk AI, a podcast about current events in AI co-hosted by Stanford PhD and former Googler Sharon Zhou and Stanford PhD student Andrey Kurenkov, to review the biggest AI stories of 2021 so far.

This was a fun chat, and a format we’ll definitely be playing with more in the future :)

Jul 21, 2021 · 43:32
92. Daniel Filan - Peering into neural nets for AI safety

Many AI researchers think it’s going to be hard to design AI systems that remain safe as AI capabilities increase. We’ve seen already on the podcast that the field of AI alignment has emerged to tackle this problem, but a related effort is also being directed at a separate dimension of the safety problem: AI interpretability.

Our ability to interpret how AI systems process information and make decisions will likely become an important factor in assuring the reliability of AIs in the future. And my guest for this episode of the podcast has focused his research on exactly that topic. Daniel Filan is an AI safety researcher at Berkeley, where he’s supervised by AI pioneer Stuart Russell. Daniel also runs AXRP, a podcast dedicated to technical AI alignment research.

Jul 14, 2021 · 01:06:03
91. Peter Gao - Self-driving cars: Past, present and future
Jul 07, 2021 · 01:01:21
90. Jeffrey Ding - China’s AI ambitions and why they matter

There are a lot of reasons to pay attention to China’s AI initiatives. Some are purely technological: Chinese companies are producing increasingly high-quality AI research, and they’re poised to become even more important players in AI over the next few years. For example, Huawei recently put together their own version of OpenAI’s massive GPT-3 language model — a feat that required compute at a scale that pushed the limits of current systems, along with deep engineering and technical know-how.

But China’s AI ambitions are also important geopolitically. In order to build powerful AI systems, you need a lot of compute power. And in order to get that, you need a lot of computer chips, which are notoriously hard to manufacture. But most of the world’s advanced computer chips are currently made in democratic Taiwan, which China claims as its own territory. You can see how quickly this kind of thing can lead to international tension.

Still, the story of US-China AI isn’t just one of competition and decoupling, but also of cooperation — or at least, that’s the case made by my guest today, China AI expert and Stanford researcher Jeffrey Ding. In addition to studying the Chinese AI ecosystem as part of his day job, Jeff publishes the very popular China AI newsletter, which offers a series of translations and analyses of Chinese-language articles about AI. Jeff acknowledges the competitive dynamics of AI research, but argues that focusing only on controversial uses of AI — like facial recognition and military applications — causes us to ignore or downplay areas where real collaboration can happen, like language translation.

Jun 30, 2021 · 49:07
89. Pointing AI in the right direction - A cross-over episode with the Banana Data podcast!

This special episode of the Towards Data Science podcast is a cross-over with our friends over at the Banana Data podcast. We’ll be zooming out and talking about some of the most important current challenges AI creates for humanity, and some of the likely future directions the technology might take.

Jun 23, 2021 · 36:36
88. Oren Etzioni - The case against (worrying about) existential risk from AI

Few would disagree that AI is set to become one of the most important economic and social forces in human history.

But along with its transformative potential has come concern about a strange new risk that AI might pose to human beings. As AI systems become exponentially more capable of achieving their goals, some worry that even a slight misalignment between those goals and our own could be disastrous. These concerns are shared by many of the most knowledgeable and experienced AI specialists at leading labs like OpenAI, DeepMind, CHAI Berkeley, Oxford and elsewhere.

But they’re not universal: I recently had Melanie Mitchell — computer science professor and author who famously debated Stuart Russell on the topic of AI risk — on the podcast to discuss her objections to the AI catastrophe argument. And on this episode, we’ll continue our exploration of the case for AI catastrophic risk skepticism with an interview with Oren Etzioni, CEO of the Allen Institute for AI, a world-leading AI research lab that’s developed many well-known projects, including the popular AllenNLP library, and Semantic Scholar.

Oren has a unique perspective on AI risk, and the conversation was lots of fun!

Jun 16, 2021 · 53:34
87. Evan Hubinger - The Inner Alignment Problem

How can you know that a super-intelligent AI is trying to do what you asked it to do?

The answer, it turns out, is: not easily. And unfortunately, an increasing number of AI safety researchers are warning that this is a problem we’re going to have to solve sooner rather than later, if we want to avoid bad outcomes — which may include a species-level catastrophe.

The type of failure mode whereby AIs optimize for things other than those we ask them to is known as an inner alignment failure in the context of AI safety. It’s distinct from outer alignment failure, which is what happens when you ask your AI to do something that turns out to be dangerous, and it was only recognized by AI safety researchers as its own category of risk in 2019. And the researcher who led that effort is my guest for this episode of the podcast, Evan Hubinger.

Evan is an AI safety veteran who’s done research at leading AI labs like OpenAI, and whose experience also includes stints at Google, Ripple and Yelp. He currently works at the Machine Intelligence Research Institute (MIRI) as a Research Fellow, and joined me to talk about his views on AI safety, the alignment problem, and whether humanity is likely to survive the advent of superintelligent AI.

Jun 09, 2021 · 01:09:32
86. Andy Jones - AI Safety and the Scaling Hypothesis

When OpenAI announced the release of their GPT-3 API last year, the tech world was shocked. Here was a language model, trained only to perform a simple autocomplete task, which turned out to be capable of language translation, coding, essay writing, question answering and many other tasks that previously would each have required purpose-built systems.

What accounted for GPT-3’s ability to solve these problems? How did it beat state-of-the-art AIs that were purpose-built to solve tasks it was never explicitly trained for? Was it a brilliant new algorithm? Something deeper than deep learning?

Well… no. As algorithms go, GPT-3 was relatively simple, and was built using a by-then fairly standard transformer architecture. Instead of a fancy algorithm, the real difference between GPT-3 and everything that came before was size: GPT-3 is a simple-but-massive, 175B-parameter model, about 10X bigger than the next largest AI system.

GPT-3 is only the latest in a long line of results that now show that scaling up simple AI techniques can give rise to new behavior, and far greater capabilities. Together, these results have motivated a push toward AI scaling: the pursuit of ever larger AIs, trained with more compute on bigger datasets. But scaling is expensive: by some estimates, GPT-3 cost as much as $5M to train. As a result, only well-resourced companies like Google, OpenAI and Microsoft have been able to experiment with scaled models.

That’s a problem for independent AI safety researchers, who want to better understand how advanced AI systems work, and what their most dangerous behaviors might be, but who can’t afford a $5M compute budget. That’s why a recent paper by Andy Jones, an independent researcher specializing in AI scaling, is so promising: Andy’s paper shows that, at least in some contexts, the capabilities of large AI systems can be predicted from those of smaller ones. If the result generalizes, it could give independent researchers the ability to run cheap experiments on small systems whose findings nonetheless carry over to expensive, scaled AIs like GPT-3. Andy was kind enough to join me for this episode of the podcast.
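
To give a flavour of that kind of extrapolation, here’s a hedged sketch that fits a trend to a handful of small training runs and extrapolates it to a much larger one. The numbers are synthetic and the pure power-law form is an assumption made for illustration; this is not Andy’s actual method or data.

```python
# Hedged illustration of predicting large-model performance from small runs.
# The measurements below are made up, and the pure power-law assumption
# (loss proportional to compute^-b) is a modelling choice, not a claim about the paper.
import numpy as np

compute = np.array([1e15, 1e16, 1e17, 1e18])   # training compute of small runs (FLOPs)
loss = np.array([3.9, 3.2, 2.7, 2.3])          # final loss of each run (synthetic)

# A power law is a straight line in log-log space, so fit it with least squares.
slope, intercept = np.polyfit(np.log10(compute), np.log10(loss), 1)

def predict_loss(c):
    """Extrapolate the fitted trend to a compute budget we never trained at."""
    return 10 ** (intercept + slope * np.log10(c))

print(f"Predicted loss at 1e21 FLOPs: {predict_loss(1e21):.2f}")
```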

Jun 02, 2021 · 01:25:45
85. Brian Christian - The Alignment Problem

In 2016, OpenAI published a blog post describing the results of one of their AI safety experiments. In it, they describe how an AI that was trained to maximize its score in a boat racing game ended up discovering a strange hack: rather than completing the race circuit as fast as it could, the AI learned that it could rack up an essentially unlimited number of bonus points by looping around a series of targets, in a process that required it to ram into obstacles, and even travel in the wrong direction through parts of the circuit.

This is a great example of the alignment problem: if we’re not extremely careful, we risk training AIs that find dangerously creative ways to optimize whatever thing we tell them to optimize for. So building safe AIs — AIs that are aligned with our values — involves finding ways to very clearly and correctly quantify what we want our AIs to do. That may sound like a simple task, but it isn’t: humans have struggled for centuries to define “good” metrics for things like economic health or human flourishing, with very little success.

Today’s episode of the podcast features Brian Christian — the bestselling author of several books related to the connection between humanity and computer science & AI. His most recent book, The Alignment Problem, explores the history of alignment research, and the technical and philosophical questions that we’ll have to answer if we’re ever going to safely outsource our reasoning to machines. Brian’s perspective on the alignment problem links together many of the themes we’ve explored on the podcast so far, from AI bias and ethics to existential risk from AI.

May 26, 2021 · 01:06:20
84. Eliano Marques - The (evolving) world of AI privacy and data security

We all value privacy, but most of us would struggle to define it. And there’s a good reason for that: the way we think about privacy is shaped by the technology we use. As new technologies emerge, which allow us to trade data for services, or pay for privacy in different forms, our expectations shift and privacy standards evolve. That shifting landscape makes privacy a moving target.

The challenge of understanding and enforcing privacy standards isn’t novel, but it’s taken on a new importance given the rapid progress of AI in recent years. For example, data that would have been useless just a decade ago — unstructured text and many types of images come to mind — is now a treasure trove of value. Should companies have the right to use data they originally collected at a time when its value was limited, now that that’s no longer the case? Do companies have an obligation to provide maximum privacy without charging their customers directly for it? Privacy in AI is as much a philosophical question as a technical one, and to discuss it, I was joined by Eliano Marques, Executive VP of Data and AI at Protegrity, a company that specializes in privacy and data protection for large companies. Eliano has worked in data privacy for the last decade.

May 19, 2021 · 53:38
83. Rosie Campbell - Should all AI research be published?

When OpenAI developed its GPT-2 language model in early 2019, they initially chose not to release the full model, owing to concerns over its potential for malicious use, as well as the need for the AI industry to experiment with new, more responsible publication practices that reflect the increasing power of modern AI systems.

This decision was controversial, and remains that way to some extent even today: AI researchers have historically enjoyed a culture of open publication and have defaulted to sharing their results and algorithms. But whatever your position may be on algorithms like GPT-2, it’s clear that at some point, if AI becomes arbitrarily flexible and powerful, there will be contexts in which limits on publication will be important for public safety.

The issue of publication norms in AI is complex, which is why it’s a topic worth exploring with people who have experience both as researchers, and as policy specialists — people like today’s Towards Data Science podcast guest, Rosie Campbell. Rosie is the Head of Safety Critical AI at Partnership on AI (PAI), a nonprofit that brings together startups, governments, and big tech companies like Google, Facebook, Microsoft and Amazon, to shape best practices, research, and public dialogue about AI’s benefits for people and society. Along with colleagues at PAI, Rosie recently finished putting together a white paper exploring the current hot debate over publication norms in AI research, and making recommendations for researchers, journals and institutions involved in AI research.

May 12, 2021 · 52:38
82. Jakob Foerster - The high cost of automated weapons

Automated weapons mean fewer casualties, faster reaction times, and more precise strikes. They’re a clear win for any country that deploys them. You can see the appeal.

But they’re also a classic prisoner’s dilemma. Once many nations have deployed them, humans no longer have to be persuaded to march into combat, and the barrier to starting a conflict drops significantly.

The real risks that come from automated weapons systems like drones aren’t always the obvious ones. Many of them take the form of second-order effects — the knock-on consequences that come from setting up a world where multiple countries have large automated forces. But what can we do about them? That’s the question we’ll be taking on during this episode of the podcast with Jakob Foerster, an early pioneer in multi-agent reinforcement learning, and incoming faculty member at the University of Toronto. Jakob has been involved in the debate over weaponized drone automation for some time, and recently wrote an open letter to German politicians urging them to consider the risks associated with the deployment of this technology.

May 05, 2021 · 54:08
81. Nicolas Miailhe - AI risk is a global problem

In December 1938, a frustrated nuclear physicist named Leo Szilard wrote a letter to the British Admiralty telling them that he had given up on his greatest invention — the nuclear chain reaction.

"The idea of a nuclear chain reaction won’t work. There’s no need to keep this patent secret, and indeed there’s no need to keep this patent too. It won’t work." — Leo Szilard

What Szilard didn’t know when he licked the envelope was that, on that very same day, a research team in Berlin had just split the uranium atom for the very first time. Within a few years, the Manhattan Project would begin, and by 1945, the first atomic bomb was dropped on the Japanese city of Hiroshima. It was only four years later — barely a decade after Szilard had written off the idea as impossible — that Russia successfully tested its first atomic weapon, kicking off a global nuclear arms race that continues in various forms to this day.

It’s a surprisingly short jump from cutting edge technology to global-scale risk. But although the nuclear story is a high-profile example of this kind of leap, it’s far from the only one. Today, many see artificial intelligence as a class of technology whose development will lead to global risks — and as a result, as a technology that needs to be managed globally. In much the same way that international treaties have allowed us to reduce the risk of nuclear war, we may need global coordination around AI to mitigate its potential negative impacts.

One of the world’s leading experts on AI’s global coordination problem is Nicolas Miailhe. Nicolas is the co-founder of The Future Society, a global nonprofit whose primary focus is encouraging responsible adoption of AI, and ensuring that countries around the world come to a common understanding of the risks associated with it. Nicolas is a veteran of the prestigious Harvard Kennedy School of Government, an appointed expert to the Global Partnership on AI, and an advisor to cities, governments and international organizations on AI policy.

Apr 28, 2021 · 56:04