Episodes

  • Highlights: #217 – Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress
    2025/06/26

    AI models today have a 50% chance of successfully completing a task that would take an expert human one hour. Seven months ago, that number was roughly 30 minutes — and seven months before that, 15 minutes.

    These are substantial, multi-step tasks requiring sustained focus: building web applications, conducting machine learning research, or solving complex programming challenges.
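
    For the quantitatively minded, the "7-month rule" described above is simple exponential doubling of the task-length horizon. Below is a minimal sketch of that extrapolation in Python; the one-hour starting point and seven-month doubling time come from the description above, while the forward projection is purely illustrative rather than METR's claim:

    ```python
    # Assumption: the 50%-success task horizon doubles every 7 months,
    # per the trend described above. Starting point: ~60 minutes today.
    DOUBLING_MONTHS = 7
    HORIZON_TODAY_MINUTES = 60.0

    def horizon_at(months_from_now: float) -> float:
        """Projected 50%-success task horizon, in minutes."""
        return HORIZON_TODAY_MINUTES * 2 ** (months_from_now / DOUBLING_MONTHS)

    print(horizon_at(-7))       # ~30 min: seven months ago
    print(horizon_at(-14))      # ~15 min: fourteen months ago
    print(horizon_at(14) / 60)  # ~4 hours, if the trend simply continues
    ```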

    Today’s guest, Beth Barnes, is CEO of METR (Model Evaluation & Threat Research) — the leading organisation measuring these capabilities.

    These highlights are from episode #217 of The 80,000 Hours Podcast: Beth Barnes on the most important graph in AI right now — and the 7-month rule that governs its progress, and include:

    • Can we see AI scheming in the chain of thought? (00:00:34)
    • We have to test model honesty even before models are used inside AI companies (00:05:48)
    • It's essential to thoroughly test relevant real-world tasks (00:10:13)
    • Recursively self-improving AI might even be here in two years — which is alarming (00:16:09)
    • Do we need external auditors doing AI safety tests, not just the companies themselves? (00:21:55)
    • A case against safety-focused people working at frontier AI companies (00:29:30)
    • Open-weighting models is often good, and Beth has changed her attitude about it (00:34:57)

    These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

    And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

    Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong

    41 min
  • Highlights: #216 – Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it
    2025/05/27

    When you have a system where ministers almost never understand their portfolios, civil servants change jobs every few months, and MPs don’t grasp parliamentary procedure even after decades in office — is the problem the people, or the structure they work in?

    Political journalist Ian Dunt studies the systemic reasons governments succeed and fail. And in his book How Westminster Works …and Why It Doesn’t, he argues that Britain’s government dysfunction and multi-decade failure to solve its key problems stem primarily from bad incentives and bad processes.

    These highlights are from episode #216 of The 80,000 Hours Podcast: Ian Dunt on why governments in Britain and elsewhere can’t get anything done – and how to fix it, and include:

    • Rob's intro (00:00:00)
    • The UK is governed from a tiny cramped house (00:00:08)
    • Replacing political distractions with departmental organisation (00:02:58)
    • The profoundly dangerous development of "delegated legislation" (00:06:42)
    • Do more independent-minded legislatures actually lead to better outcomes? (00:09:08)
    • MPs waste much of their time helping constituents with random complaints (00:12:50)
    • How to keep expert civil servants (00:15:44)
    • Unlikely heroes in the House of Lords (00:18:33)
    • Proportional representation and other alternatives to first-past-the-post (00:22:02)

    These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

    And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

    Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong

    31 min
  • Highlights: #215 – Tom Davidson on how AI-enabled coups could allow a tiny group to seize power
    2025/05/16

    Throughout history, technological revolutions have fundamentally shifted the balance of power in society. The Industrial Revolution created conditions where democracies could dominate for the first time — as nations needed educated, informed, and empowered citizens to deploy advanced technologies and remain competitive.

    Unfortunately, there’s every reason to think artificial general intelligence (AGI) will reverse that trend.

    In a new paper, Tom Davidson — senior research fellow at the Forethought Centre for AI Strategy — argues that advanced AI systems will enable unprecedented power grabs by tiny groups of people, primarily by removing the need for other human beings to participate.

    These highlights are from episode #215 of The 80,000 Hours Podcast: Tom Davidson on how AI-enabled coups could allow a tiny group to seize power, and include:

    • "No person rules alone" — except now they might (00:00:13)
    • The 3 threat scenarios (00:06:17)
    • Underpinning all 3 threats: Secret AI loyalties (00:10:15)
    • Is this common sense or far-fetched? (00:13:46)
    • How to automate a military coup (00:17:41)
    • If you took over the US, could you take over the whole world? (00:22:44)
    • Secret loyalties all the way down (00:26:27)
    • Is it important to have more than one powerful AI country? (00:29:59)
    • What transparency actually looks like (00:33:08)

    These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

    And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

    Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong

    37 min
  • Highlights: #214 – Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway
    2025/04/18

    Most AI safety conversations centre on alignment: ensuring AI systems share our values and goals. But despite progress, we’re unlikely to know we’ve solved the problem before the arrival of human-level and superhuman systems in as little as three years.

    So some — including Buck Shlegeris, CEO of Redwood Research — are developing a backup plan to safely deploy models we fear are actively scheming to harm us: so-called “AI control.” While this may sound mad, given the reluctance of AI companies to delay deploying anything they train, not developing such techniques is probably even crazier.

    These highlights are from episode #214 of The 80,000 Hours Podcast: Buck Shlegeris on controlling AI that wants to take over – so we can use it anyway, and include:

    • What is AI control? (00:00:15)
    • One way to catch AIs that are up to no good (00:07:00)
    • What do we do once we catch a model trying to escape? (00:13:39)
    • Team Human vs Team AI (00:18:24)
    • If an AI escapes, is it likely to be able to beat humanity from there? (00:24:59)
    • Is alignment still useful? (00:32:10)
    • Could 10 safety-focused people in an AGI company do anything useful? (00:35:34)

    These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

    And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

    Highlights put together by Ben Cordell, Milo McGuire, and Dominic Armstrong

    41 min
  • Off the Clock #8: Leaving Las London with Matt Reardon
    2025/04/01

    Watch this episode on YouTube! https://youtu.be/fJssGodnCQg

    Conor and Arden sit down with Matt in his farewell episode to discuss the law, their team retreat, his lessons learned from 80k, and the fate of the show.

    1 hr 43 min
  • Highlights: #213 – Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared
    2025/03/25

    The 20th century saw unprecedented change: nuclear weapons, satellites, the rise and fall of communism, third-wave feminism, the internet, postmodernism, game theory, genetic engineering, the Big Bang theory, quantum mechanics, birth control, and more. Now imagine all of it compressed into just 10 years.

    That’s the future Will MacAskill — philosopher and researcher at the Forethought Centre for AI Strategy — argues we need to prepare for in his new paper “Preparing for the intelligence explosion.” Not in the distant future, but probably in three to seven years.

    These highlights are from episode #213 of The 80,000 Hours Podcast: Will MacAskill on AI causing a “century in a decade” — and how we’re completely unprepared, and include:

    • Rob's intro (00:00:00)
    • A century of history crammed into a decade (00:00:17)
    • What does a good future with AGI even look like? (00:04:48)
    • AI takeover might happen anyway — should we rush to load in our values? (00:09:29)
    • Lock-in is plausible where it never was before (00:14:40)
    • ML researchers are feverishly working to destroy their own power (00:20:07)
    • People distrust utopianism for good reason (00:24:30)
    • Non-technological disruption (00:29:18)
    • The 3 intelligence explosions (00:31:10)

    These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

    And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

    Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

    34 min
  • Highlights: #212 – Allan Dafoe on why technology is unstoppable & how to shape AI development anyway
    2025/03/12

    Technology doesn’t force us to do anything — it merely opens doors. But military and economic competition pushes us through. That’s how Allan Dafoe — director of frontier safety and governance at Google DeepMind — explains one of the deepest patterns in technological history: once a powerful new capability becomes available, societies that adopt it tend to outcompete those that don’t. Those who resist too much can find themselves taken over or rendered irrelevant.

    These highlights are from episode #212 of The 80,000 Hours Podcast: Allan Dafoe on why technology is unstoppable & how to shape AI development anyway, and include:

    • Who's Allan Dafoe? (00:00:00)
    • Astounding patterns in macrohistory (00:00:23)
    • Are humans just along for the ride when it comes to technological progress? (00:03:58)
    • Flavours of technological determinism (00:07:11)
    • The super-cooperative AGI hypothesis and backdoors (00:12:50)
    • Could having more cooperative AIs backfire? (00:19:16)
    • The offence-defence balance (00:24:23)

    These aren't necessarily the most important or even most entertaining parts of the interview — so if you enjoy this, we strongly recommend checking out the full episode!

    And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org.

    Highlights put together by Simon Monsour, Milo McGuire, and Dominic Armstrong

    29 min
  • Off the Clock #7: Getting on the Crazy Train with Chi Nguyen
    2025/01/13

    Watch this episode on YouTube! https://youtu.be/IRRwHCK279E

    Matt, Bella, and Huon sit down with Chi Nguyen to discuss cooperating with aliens, elections of future past, and Bad Billionaires pt. 2.

    Check out:

    • Matt’s summer appearance on the BBC on funding for the arts
    • Chi’s ECL Explainer (get in touch to support!)
    1 hr 24 min