Cover art for “Am I?”

Am I?

Author: The AI Risk Network

About this content

The AI consciousness podcast, hosted by AI safety researcher Cameron Berg and philosopher Milo Reed

theairisknetwork.substack.com
The AI Risk Network
Social Sciences
Episodes
  • Did Google’s AI Just Rage Quit? Inside Gemini’s Strange Meltdown | AM I? | EP5
    2025/09/05

    In this episode of Am I?, philosopher Milo Reed and AI safety researcher Cameron Berg dive into the strange case of Gemini’s meltdown — and what it tells us about emerging signs of self-awareness in today’s most advanced systems.

    We explore:

    * The bizarre Gemini “I am a disgrace” incident

    * What these behaviors could signal about AI self-models

    * Why the mirror test for AI is suddenly relevant

    * Whether we’re seeing glimpses of alien consciousness — or just sophisticated pattern matching

    * Why the stakes for understanding AI minds have never been higher

    📢 TAKE ACTION ON AI RISK: http://www.safe.ai/act

    💬 Join the Conversation: Do you think AI can feel anything at all, or is this just clever mimicry? Comment below.

    🔗 Stay in the loop 🔗

    🔔 Subscribe → https://youtube.com/@TheAIRiskNetwork

    👉 Follow Cam on X/Twitter → https://twitter.com/CamBerg

    🗞️ Newsletter + show notes → https://www.guardrailnow.org/#support

    #aiconsciousness #AISafety #geminiai #aialignment #AIRageQuit



    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com
    29 min
  • Big Tech vs. AI Consciousness Research — Who’s Right? | AM I? | EP4
    2025/08/27

    This week on AM I? we sit down with Will Millership, CEO of PRISM, a non-profit preparing society for a future with conscious, or seemingly conscious, AI.

    Will joins us to discuss the current state of AI consciousness research and respond to what may be the most brazen attack on the field to date.

    Microsoft AI CEO Mustafa Suleyman recently published an article arguing that AI consciousness research is inherently dangerous and should be abandoned as a legitimate area of inquiry.

    Cameron and Will—two leading voices in AI consciousness research—break down why Suleyman's dismissive rhetoric is not only misguided but actually more dangerous than the research he's criticizing.

    They explore the current landscape of the field, the motivations behind these attacks, and why rigorous scientific investigation into AI consciousness is more crucial than ever.

    📢 TAKE ACTION ON AI RISK: http://www.safe.ai/act

    🔗 Stay in the loop 🔗

    🔔 Subscribe → https://youtube.com/@TheAIRiskNetwork

    👉 Follow Cam on X/Twitter → https://twitter.com/CamBerg

    👉 Learn More About PRISM → https://youtube.com/@PRISM_Global

    🗞️ Newsletter + show notes → https://www.guardrailnow.org/#support

    1 hr
  • Are We Already Losing Control of AI? | Am I? | EP3
    2025/08/22

    📢 TAKE ACTION ON AI RISK: http://www.safe.ai/act

    What happens when advanced AI learns to hack human trust?

    In this episode of Am I?, we dive into the uncomfortable truth: AI doesn’t just replicate our words — it can exploit our biases, blind spots, and beliefs. From deepfake-driven misinformation to subtle persuasion, we’re entering a world where it’s harder than ever to tell what’s real.

    If AI can outsmart human instincts, how do we keep control? And who’s responsible when it doesn’t?

    💬 Watch, debate, and decide.

    Because figuring this out might be the most important thing we ever do.

    🔗 Stay in the loop 🔗

    🔔 Subscribe

    👉 Follow Cam on X/Twitter

    🗞️ Newsletter + show notes

    🛍️ Merch Store

    #AIExtinctionRisk #AISafety #AGIRisk #AIConsciousness

    1 hr 19 min
No reviews yet