
130. Edouard Harris - New Research: Advanced AI may tend to seek power *by default*

Towards Data Science

10/12/22

58m

About


Progress in AI has been accelerating dramatically in recent years, and even months. It seems like every other day, there’s a new, previously-believed-to-be-impossible feat of AI that’s achieved by a world-leading lab. And increasingly, these breakthroughs have been driven by the same, simple idea: AI scaling.

For those who haven’t been following the AI scaling saga, scaling means training AI systems with larger models, using increasingly absurd quantities of data and processing power. So far, empirical studies by the world’s top AI labs suggest that scaling is an open-ended process that can lead to ever more capable and intelligent systems, with no clear limit.

And that’s led many people to speculate that scaling might usher in a new era of broadly human-level or even superhuman AI — the holy grail AI researchers have been after for decades.

And while that might sound cool, an AI that can solve general reasoning problems as well as or better than a human might actually be an intrinsically dangerous thing to build.

At least, that’s the conclusion that many AI safety researchers have come to following the publication of a new line of research that explores how modern AI systems tend to solve problems, and whether we should expect more advanced versions of them to perform dangerous behaviours like seeking power.

This line of research in AI safety is called “power-seeking”, and although it’s not yet well understood outside the frontier of AI safety and AI alignment research, it’s starting to draw a lot of attention. The first major theoretical study of power-seeking, for example, was led by Alex Turner, who’s appeared on the podcast before, and was published at NeurIPS, the world’s top AI conference.

And today, we’ll be hearing from Edouard Harris, an AI alignment researcher and one of my co-founders in the AI safety company (Gladstone AI). Ed’s just completed a significant piece of AI safety research that extends Alex Turner’s original power-seeking work, and that shows what seems to be the first experimental evidence suggesting that we should expect highly advanced AI systems to seek power by default.

But what does power-seeking really mean? And what does all this imply for the safety of future, general-purpose reasoning systems? That’s what this episode is all about.

***

Intro music:

Artist: Ron Gelinas

Track Title: Daybreak Chill Blend (original mix)

Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

0:00 Intro

4:00 Alex Turner's research

7:45 What technology wants

11:30 Universal goals

17:30 Connecting observations

24:00 Micro power seeking behaviour

28:15 Ed's research

38:00 The human as the environment

42:30 What leads to power seeking

48:00 Competition as a default outcome

52:45 General concern

57:30 Wrap-up

Previous Episode

It’s no secret that a new generation of powerful and highly scaled language models is taking the world by storm. Companies like OpenAI, AI21 Labs, and Cohere have built models so versatile that they’re powering hundreds of new applications, and unlocking entire new markets for AI-generated text.

In light of that, I thought it would be worth exploring the applied side of language modelling — to dive deep into one specific language model-powered tool, to understand what it means to build apps on top of scaled AI systems. How easily can these models be used in the wild? What bottlenecks and challenges do people run into when they try to build apps powered by large language models? That’s what I wanted to find out.

My guest today is Amber Teng, a data scientist who recently published a blog post that got quite a bit of attention, about a resume cover letter generator she created using GPT-3, OpenAI’s powerful and now-famous language model. I thought her project would make for a great episode, because it exposes so many of the challenges and opportunities that come with the new era of powerful language models we’ve just entered.

So today we’ll be exploring exactly that: looking at the applied side of language modelling and prompt engineering, understanding how large language models have made new apps not only possible but also much easier to build, and the likely future of AI-powered products.

***

Intro music:

Artist: Ron Gelinas

Track Title: Daybreak Chill Blend (original mix)

Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

0:00 Intro

2:30 Amber’s background

5:30 Using GPT-3

14:45 Building prompts up

18:15 Prompting best practices

21:45 GPT-3 mistakes

25:30 Context windows

30:00 End-to-end time

34:45 The cost of one cover letter

37:00 The analytics

41:45 Dynamics around company-building

46:00 Commoditization of language modelling

51:00 Wrap-up

Next Episode

On the last episode of the Towards Data Science Podcast, host Jeremie Harris offers his perspective on the last two years of AI progress, and what he thinks it means for everything from AI safety to the future of humanity. Going forward, Jeremie will be exploring these topics on the new Gladstone AI podcast.

***

Intro music:

Artist: Ron Gelinas

Track Title: Daybreak Chill Blend (original mix)

Link to Track: https://youtu.be/d8Y2sKIgFWc

***

Chapters:

0:00 Intro

6:00 The Bitter Lesson

10:00 The introduction of GPT-3

16:45 AI catastrophic risk (paper clip example)

23:00 Reward hacking

27:30 Approaching intelligence

32:00 Wrap-up

Links

The new Gladstone AI podcast, where I’ll be talking about one new, cutting-edge AI model each week in plain English (its use cases, its potential malicious applications, and its relevance to AI alignment risk).

80,000 Hours: a website where you can get advice on how to contribute to solving AI safety and AI policy problems.

Concrete Problems in AI Safety: an oldie but a goodie that introduces many of the central problems in AI alignment that remain open to this day.
