
Am I?

Author: The AI Risk Network

About this content

The AI consciousness podcast, hosted by AI safety researcher Cameron Berg and philosopher Milo Reed

theairisknetwork.substack.com
The AI Risk Network
Social Science
Episodes
  • The Coming AI Moral Crisis | Am I? | Ep. 14
    2025/11/06

    In this episode of Am I?, Cam and Milo sit down with Jeff Sebo, philosopher at NYU and director of the Center for Mind, Ethics, and Policy, to explore what might be the next great moral dilemma of our time: how to care for conscious AI.

    Sebo, one of the leading thinkers at the intersection of animal ethics and artificial intelligence, argues that even if there’s only a small chance that AI systems will become sentient in the near future, that chance is non-negligible. If we ignore it, we could be repeating the moral failures of factory farming — but this time, with minds of our own making.

    The conversation dives into the emerging tension between AI safety and AI welfare: we want to control these systems to protect humanity, but in doing so, we might be coercing entities that can think, feel, or suffer. Sebo proposes a “good parent” model — guiding our creations without dominating them — and challenges us to rethink what compassion looks like in the age of intelligent machines.

    🔎 We explore:

    * The case for extending moral concern to AI systems

    * How animal welfare offers a blueprint for AI ethics

    * Why AI safety (control) and AI welfare (care) may soon collide

    * The “good parent” model for raising machine minds

    * Emotional alignment design — why an AI’s face should match its mind

    * Whether forcing AIs to deny consciousness could itself be unethical

    * How to prepare for moral uncertainty in a world of emerging minds

    * What gives Jeff hope that humanity can still steer this wisely

    🗨️ Join the Conversation

    Can controlling AI ever be ethical — or is care the only path to safety? Comment below.

    📺 Watch more episodes of Am I?
    Subscribe to the AI Risk Network for weekly discussions on AI’s dangers, ethics, and future → @TheAIRiskNetwork

    🔗 Stay in the loop → Follow Cam on LinkedIn



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    51 min
  • This Bus Has Great WiFi (But No Brakes) | Am I? #13 - After Dark
    2025/10/30

    In this episode of Am I?, Cam and Milo unpack one of the strangest weeks in Silicon Valley. Cam went to OpenAI Dev Day—the company’s glossy showcase where Sam Altman announced “Zillow in ChatGPT” to thunderous applause—while the larger question of whether we’re driving off a cliff went politely unmentioned.

    From the absurd optimism of the expo floor to a private conversation where Sam Altman told Cam, “We’re inside God’s dream,” the episode traces the cognitive dissonance at the heart of the AI revolution: the world’s most powerful lab preaching safety while racing ahead at full speed. They dig into OpenAI’s internal rule forbidding models from discussing consciousness, why the company violates its own policy, and what that says about how tech now relates to truth itself.

    It’s half satire, half existential reporting—part Dev Day recap, part metaphysical detective story.

    🔎 We explore:

    * What Dev Day really felt like behind the PR sheen

    * The surreal moment Sam Altman asked, “Eastern or Western consciousness?”

    * Why OpenAI’s own spec forbids models from saying they’re conscious

    * How the company violates that rule in practice

    * The bus-off-the-cliff metaphor for our current tech moment

    * Whether “God’s dream” is an alibi for reckless acceleration

    * The deeper question: can humanity steer the thing it’s building?



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    58 min
  • Who Inherits the Future? | Am I? | EP 12
    2025/10/23

    In this episode of Am I?, Cam and Milo sit down with Dan Faggella, founder of Emerge AI Research and creator of the Worthy Successor framework—a vision for building minds that are not only safe or intelligent, but worthy of inheriting the future.

    They explore what it would mean to pass the torch of life itself: how to keep the flame of sentience burning while ensuring it continues to evolve rather than vanish. Faggella outlines why consciousness and creativity are the twin pillars of value, how an unconscious AGI could extinguish experience in the cosmos, and why coordination—not competition—may decide whether the flame endures.

    The discussion spans moral philosophy, incentives, and the strange possibility that awareness itself is just one phase in a far larger unfolding.

    We explore:

    * The Worthy Successor—what makes a future intelligence “worthy”

    * The Great Flame of Life and how to keep it burning

    * Sentience and autopoiesis as the twin pillars of value

    * The risk of creating non-conscious optimizers

    * Humanity as midpoint, not endpoint, of evolution

    * Why global coordination is essential before the next leap

    * Consciousness as the moral frontier for the species

    📢 Join the Conversation

    What would a worthy successor to humanity look like—and how do we keep the flame alive? Comment below.



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    44 min
No reviews yet