
111. Mo Gawdat - Scary Smart: A former Google exec’s perspective on AI risk
Towards Data Science
01/26/22
•60m
If you were scrolling through your newsfeed in late September 2021, you may have caught this splashy headline from The Times of London: “Can this man save the world from artificial intelligence?”. The man in question was Mo Gawdat, an entrepreneur and senior tech executive who spent several years as Chief Business Officer at Google X (now called X Development), Google’s semi-secret research facility that experiments with moonshot projects like self-driving cars, flying vehicles, and geothermal energy. At X, Mo was exposed to the absolute cutting edge of many fields — one of which was AI. His experience watching AI systems learn and interact with the world raised red flags for him — hints of the potentially disastrous failure modes of the AI systems we might just end up with if we don’t get our act together now.
Mo writes about his experience as an insider at one of the world’s most secretive research labs and how it led him to worry about AI risk, but also about AI’s promise and potential in his new book, Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. He joined me to talk about just that on this episode of the TDS podcast.
Previous Episode

110. Alex Turner - Will powerful AIs tend to seek power?
January 19, 2022
•46m
Today’s episode is somewhat special, because we’re going to be talking about what might be the first solid quantitative study of the power-seeking tendencies that we can expect advanced AI systems to have in the future.
For a long time, there’s been a debate in the AI safety world between:
- People who worry that powerful AIs could eventually displace or even eliminate humanity altogether as they find more clever, creative and dangerous ways to optimize their reward metrics, on the one hand, and
- People who say that’s Terminator-baiting Hollywood nonsense that anthropomorphizes machines in a way that’s unhelpful and misleading, on the other.
Unfortunately, recent work in AI alignment — and in particular, a spotlighted 2021 NeurIPS paper — suggests that the AI takeover argument might be stronger than many had realized. In fact, it’s starting to look like we ought to expect to see power-seeking behaviours from highly capable AI systems by default. These behaviours include things like AI systems preventing us from shutting them down, repurposing resources in pathological ways to serve their objectives, and even in the limit, generating catastrophes that would put humanity at risk.
As concerning as these possibilities might be, it’s exciting that we’re starting to develop a more robust and quantitative language to describe AI failures and power-seeking. That’s why I was so excited to sit down with AI researcher Alex Turner, the author of the spotlighted NeurIPS paper on power-seeking, and discuss his path into AI safety, his research agenda and his perspective on the future of AI on this episode of the TDS podcast.
***
Intro music:
➞ Artist: Ron Gelinas
➞ Track Title: Daybreak Chill Blend (original mix)
➞ Link to Track: https://youtu.be/d8Y2sKIgFWc
***
Chapters:
2:05 Interest in alignment research
8:00 Two camps of alignment research
13:10 The NeurIPS paper
17:10 Optimal policies
25:00 Two-piece argument
28:30 Relaxing certain assumptions
32:45 Objections to the paper
39:00 Broader sense of optimization
46:35 Wrap-up
Next Episode

Until very recently, the study of human disease involved looking at big things — like organs or macroscopic systems — and figuring out when and how they can stop working properly. But that’s all started to change: in recent decades, new techniques have allowed us to look at disease in a much more detailed way, by examining the behaviour and characteristics of single cells.
One class of those techniques is now known as single-cell genomics — the study of gene expression and function at the level of single cells. Single-cell genomics is creating new, high-dimensional datasets consisting of tens of millions of cells whose gene expression profiles and other characteristics have been painstakingly measured. And these datasets are opening up exciting new opportunities for AI-powered drug discovery — opportunities that startups are now starting to tackle head-on.
Joining me for today’s episode is Tali Raveh, Senior Director of Computational Biology at Immunai, a startup that’s using single-cell level data to perform high resolution profiling of the immune system at industrial scale. Tali joined me to talk about what makes the immune system such an exciting frontier for modern medicine, and how single-cell data and AI might be poised to generate unprecedented breakthroughs in disease treatment on this episode of the TDS podcast.
---
Intro music:
➞ Artist: Ron Gelinas
➞ Track Title: Daybreak Chill Blend (original mix)
➞ Link to Track: https://youtu.be/d8Y2sKIgFWc
---
Chapters:
0:00 Intro
2:00 Tali’s background
4:00 Immune systems and modern medicine
14:40 Data collection technology
19:00 Exposing cells to different drugs
24:00 Labeled and unlabeled data
27:30 Dataset status
31:30 Recent algorithmic advances
36:00 Cancer and immunology
40:00 The next few years
41:30 Wrap-up