
For Humanity: An AI Safety Podcast


By: The AI Risk Network

About this content

For Humanity: An AI Safety Podcast is the AI safety podcast for regular people. Peabody, duPont-Columbia, and multi-Emmy Award-winning former journalist John Sherman explores the shocking worst-case scenario of artificial intelligence: human extinction. The makers of AI openly admit that their work could kill all humans, possibly within as little as 2-10 years. This podcast is solely about the threat of human extinction from AGI. We’ll name and meet the heroes and villains, explore the issues and ideas, and show what you can do to help save humanity.

theairisknetwork.substack.com (The AI Risk Network)
Episodes
  • Big Tech Under Pressure: Hunger Strikes and the Fight for AI Safety | For Humanity EP69
    2025/09/10

    Get 40% off Ground News’ unlimited-access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.

    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

    In Episode 69 of For Humanity: An AI Risk Podcast, we explore one of the most striking acts of activism in the AI debate: hunger strikes aimed at pushing Big Tech to prioritize safety over speed.

    Michael and Dennis, two AI safety advocates, join John from outside DeepMind’s London headquarters, where they are staging hunger strikes to demand that frontier AI development be paused. Inspired by Guido’s protest in San Francisco, they are risking their health to push tech leaders like Demis Hassabis to make public commitments to slow down the AI race.

    This episode looks at how ordinary people are taking extraordinary steps to demand accountability, why this form of protest is gaining attention, and what history tells us about the power of public pressure.

    In this conversation, you’ll discover:

    * Why hunger strikers believe urgent action on AI safety is necessary

    * How Big Tech companies are responding to growing public concern

    * The role of parents, workers, and communities in shaping AI policy

    * Parallels with past social movements that drove real change

    * Practical ways you can make your voice heard in the AI safety conversation

    This isn’t just about technology—it’s about responsibility, leadership, and the choices we make for future generations.

    🔗 Key Links

    👉 AI Pause Petition: https://safe.ai/act

    👉 Follow the movement on X: https://x.com/safeai

    👉 Learn more and get involved: GuardRailNow.org



    58 min
  • Forcing Sunlight Into OpenAI | For Humanity: An AI Risk Podcast | EP68
    2025/08/13

    Get 40% off Ground News’ unlimited-access Vantage Plan at https://ground.news/airisk for only $5/month, and explore how stories are framed worldwide and across the political spectrum.

    TAKE ACTION TO DEMAND AI SAFETY LAWS: https://safe.ai/act

    Tyler Johnston, Executive Director of The Midas Project, joins John to break down the brand-new open letter demanding that OpenAI answer seven specific questions about its proposed corporate restructuring. The letter, published on 4 August 2025 and coordinated by the Midas Project, already carries the signatures of more than 100 Nobel laureates, technologists, legal scholars, and public figures.

    What we cover:

    * Why transparency matters now: OpenAI is “making a deal on humanity’s behalf without allowing us to see the contract.” (themidasproject.com)

    * The Seven Questions the letter poses—ranging from whether OpenAI will still prioritize its nonprofit mission over profit to whether it will reveal the new operating agreement that governs AGI deployment. (openai-transparency.org, themidasproject.com)

    * Who’s on board: Signatories include Geoffrey Hinton, Vitalik Buterin, Lawrence Lessig, and Stephen Fry, underscoring broad concern across science, tech, and public life. (themidasproject.com)

    * Next steps: How you can read the full letter, add your name, and help keep the pressure on for accountability.

    🔗 Key Links

    👉 Read / Sign the Open Letter: https://www.openai-transparency.org/

    👉 The Midas Project (official site): https://www.themidasproject.com/

    👉 Follow The Midas Project on X: https://x.com/TheMidasProj

    👉 Subscribe for weekly AI-risk conversations → http://bit.ly/ForHumanityYT

    👍 Like • Comment • Share — because transparency only happens when we demand it.



    54 min
  • Right Wing AI Risk Alarm | For Humanity | EP67
    2025/07/24

    🚨 RIGHT‑WING AI ALARM | For Humanity #67

    Steve Bannon, Tucker Carlson, and other conservative voices are sounding fresh warnings on AI extinction risk. John breaks down what’s real, what’s hype, and why this moment matters.


    ⏰ WHAT’S INSIDE

    • The ideological shift that’s bringing the right into the AI‑safety fight

    • New bills on the Hill that could shape model licensing & oversight

    • Action steps for parents, policymakers, and technologists

    • A first look at the AI Risk Network — five shows, one mission: get the public ready for advanced AI


    🔗 TAKE ACTION & LEARN MORE

    Alliance for Secure AI

    Website ▸ https://secureainow.org

    X / Twitter ▸ https://x.com/secureainow


    AI Policy Network

    Website ▸ https://theaipn.org

    LinkedIn ▸ https://www.linkedin.com/company/theaipn


    📡 JOIN THE NEW AI RISK NETWORK

    Subscribe here ➜ [insert channel URL]

    Turn on alerts so you never miss an episode, short, or live Q&A.


    👍 If you learned something, hit Like, drop a comment, and share this link with one person who should be watching. Every click helps wake up the world to AI risk.



    1 hr 16 min
No reviews yet