Exploring Machine Consciousness

By: PRISM

About this content

A podcast from PRISM (The Partnership for Research Into Sentient Machines), exploring the possibility and implications of machine consciousness. Visit www.prism-global.com for more about our work.

© 2025 Exploring Machine Consciousness
Science
Episodes
  • Cameron Berg: Why Do LLMs Report Subjective Experience?
    2025/12/08

    Cameron Berg is Research Director at AE Studio, where he leads research exploring markers for subjective experience in machine learning systems. With a background in cognitive science from Yale and previous work at Meta AI, Cameron investigates the intersection of AI alignment and potential consciousness.

    In this episode, Cameron shares his empirical research into whether current Large Language Models are merely mimicking human text, or potentially developing internal states that resemble subjective experience. We discuss:

    • New experimental evidence where LLMs report "vivid and alien" subjective experiences when engaging in self-referential processing
    • Mechanistic interpretability findings showing that suppressing "deception" features in models actually increases claims of consciousness—challenging the idea that AI is simply telling us what we want to hear
    • Why Cameron has shifted from skepticism to a 20-30% credence that current models possess subjective experience
    • The "convergent evidence" strategy, including findings that models report internal dissonance and frustration when facing logical paradoxes
    • The existential implications of "mind crime" and the urgent need to identify negative valence (suffering) computationally—to avoid creating vast amounts of artificial suffering

    Cameron argues for a pragmatic, evidence-based approach to AI consciousness, emphasizing that even a small probability of machine suffering represents a massive ethical risk requiring rigorous scientific investigation rather than dismissal.
    58 min
  • Lucius Caviola: A Future with Digital Minds? Expert Estimates and Societal Response
    2025/11/19

    Lucius Caviola is an Assistant Professor in the Social Science of AI at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, and a Research Associate in Psychology at Harvard University. His research explores how the potential arrival of conscious AI will reshape our social and moral norms. In today's interview, Lucius examines the psychological and social factors that will determine whether this transition unfolds well, or ends in moral catastrophe. He discusses:

    • Why experts estimate a 50% chance that conscious digital minds will emerge by 2050
    • The "takeoff" scenario where digital minds could outnumber humans in welfare capacity within a decade
    • How "biological chauvinism" leads people to deny consciousness even in perfect whole-brain emulations
    • The dual risks of "under-attribution" (unwittingly creating mass suffering) and "over-attribution" (sacrificing human values for unfeeling code)
    • Surprising findings that people refuse to "harm" AI in economic games even when they explicitly believe the AI isn't conscious

    Lucius argues that rigorous social science and forecasting are essential tools for navigating these risks, moving beyond intuition to prevent us from accidentally creating vast populations of digital beings capable of suffering, or failing to recognise consciousness where it exists.

    1 hr 12 min
  • Lenore Blum: AI Consciousness is Inevitable: The Conscious Turing Machine
    2025/11/03

    *Lenore refers to a few slides in this podcast; you can see them here.

    Intro

    Today's guest, distinguished mathematician and computer scientist Lenore Blum, explains why she and her husband Manuel believe machine consciousness isn't just possible; it's inevitable. Their reasoning? If consciousness is computational (and they're betting it is), and we can mathematically specify those computations, then we can build them. It's that simple, and that profound.

    In this conversation, hosts Will Millership and Callum Chace discuss with Lenore:

    • How the Conscious Turing Machine (CTM) draws from and extends the foundational ideas of Alan Turing's Universal Turing Machine.
    • Using mathematics to "extract and simplify" the complexities of consciousness, searching for the fundamental, formal principles that define it.
    • How the CTM acts as a high-level framework that aligns with the functionalities of competing theories like Global Workspace Theory and Integrated Information Theory (IIT).
    • Why the Blums believe that AI consciousness is "inevitable" and that this provides a functional "roadmap for a conscious AI".
    • The ethical implications of machine suffering, and why the phenomenon of "pain asymbolia" suggests a conscious AI must be able to suffer in order to function.
    • What lessons Alan Turing's original "imitation game" can offer us for creating a practical, real-world test for machine consciousness.

    Lenore's Work (links)

    • Blum, L., & Blum, M. (2024). AI Consciousness is Inevitable: A Theoretical Computer Science Perspective. arXiv. https://arxiv.org/pdf/2403.17101
    • Blum, L., & Blum, M. (2022). A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine. PNAS, 119(21). https://doi.org/10.1073/pnas.2115934119
    • Closer to Truth, Blums’ Conscious Turing Machine
    • Full list of references here.
    1 hr 43 min
No reviews yet