
Warning Shots

Author: The AI Risk Network

About this content

An urgent weekly recap of AI risk news, hosted by John Sherman, Liron Shapira, and Michael Zafiris.

theairisknetwork.substack.com
The AI Risk Network
Politics & Government
Episodes
  • The AI That Doesn’t Want to Die: Why Self-Preservation Is Built Into Intelligence | Warning Shots #16
    2025/11/02

    In this episode of Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence unpack new safety testing from Palisade Research suggesting that advanced AIs are beginning to resist shutdown — even when told to allow it.

    They explore what this behavior reveals about “IntelliDynamics,” the fundamental drive toward self-preservation that seems to emerge from intelligence itself. Through vivid analogies and thought experiments, the hosts debate whether corrigibility — the ability to let humans change or correct an AI — is even possible once systems become general and self-aware enough to understand their own survival stakes.

    Along the way, they tackle:

    * Why every intelligent system learns “don’t let them turn me off.”

    * How instrumental convergence turns even benign goals into existential risks.

    * Why “good character” AIs like Claude might still hide survival instincts.

    * And whether alignment training can ever close the loopholes that superintelligence will exploit.

    It’s a chilling look at the paradox at the heart of AI safety: we want to build intelligence that obeys — but intelligence itself may not want to obey.

    🌎 www.guardrailnow.org

    👥 Follow our Guests:

    🔥 Liron Shapira — @DoomDebates

    🔎 Michael — @lethal-intelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    23 min
  • The Letter That Could Rewrite the Future of AI | Warning Shots #15
    2025/10/26

    This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence break down the Future of Life Institute’s explosive new “Superintelligence Statement” — a direct call to ban the development of superintelligence until there’s scientific proof and public consent that it can be done safely.

    They trace the evolution from the 2023 Center for AI Safety statement (“Mitigating the risk of extinction from AI…”) to today’s far bolder demand: “Don’t build superintelligence until we’re sure it won’t destroy us.”

    Together, they unpack:

    * Why “ban superintelligence” could become the new rallying cry for AI safety

    * How public opinion is shifting toward regulation and restraint

    * The fierce backlash from policymakers like Dean Ball — and what it exposes

    * Whether statements and signatures can turn into real political change

    This episode captures a turning point: the moment when AI safety moves from experts to the people.

    If it’s Sunday, it’s Warning Shots.

    ⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

    🌎 www.guardrailnow.org

    👥 Follow our Guests:

    🔥 Liron Shapira — @DoomDebates

    🔎 Michael — @lethal-intelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    28 min
  • AI Leaders Admit: We Can’t Stop the Monster We’re Creating | Warning Shots Ep. 14
    2025/10/19

    This week on Warning Shots, John Sherman, Liron Shapira, and Michael from Lethal Intelligence dissect a chilling pattern emerging among AI leaders: open admissions that they’re creating something they can’t control.

    Anthropic co-founder Jack Clark compares his company’s AI to “a mysterious creature,” admitting he’s deeply afraid yet unable to stop. Elon Musk, meanwhile, shrugs off responsibility — saying he’s “warned the world” and can only make his own version of AI “less woke.”

    The hosts unpack the contradictions, incentives, and moral fog surrounding AI development:

    * Why safety-conscious researchers still push forward

    * Whether “regulatory capture” explains the industry’s safety theater

    * How economic power and ego drive the race toward AGI

    * Why even insiders joke about “30% extinction risk” like it’s normal

    As John says, “Don’t believe us — listen to them. The builders are indicting themselves.”

    ⚠️ Subscribe to Warning Shots for weekly breakdowns of the world’s most alarming AI confessions — from the people making the future, and possibly ending it.

    🌎 guardrailnow.org

    👥 Follow our Guests:

    💡 Liron Shapira — @DoomDebates

    🔎 Michael — @Lethal-Intelligence



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    21 min
No reviews yet