Episodes

  • Cameron Berg: Why Do LLMs Report Subjective Experience?
    2025/12/08

    Cameron Berg is Research Director at AE Studio, where he leads research exploring markers for subjective experience in machine learning systems. With a background in cognitive science from Yale and previous work at Meta AI, Cameron investigates the intersection of AI alignment and potential consciousness.

    In this episode, Cameron shares his empirical research into whether current Large Language Models are merely mimicking human text, or potentially developing internal states that resemble subjective experience. We discuss:

    • New experimental evidence where LLMs report "vivid and alien" subjective experiences when engaging in self-referential processing
    • Mechanistic interpretability findings showing that suppressing "deception" features in models actually increases claims of consciousness—challenging the idea that AI is simply telling us what we want to hear
    • Why Cameron has shifted from skepticism to a 20-30% credence that current models possess subjective experience
    • The "convergent evidence" strategy, including findings that models report internal dissonance and frustration when facing logical paradoxes
    • The existential implications of "mind crime" and the urgent need to identify negative valence (suffering) computationally—to avoid creating vast amounts of artificial suffering

    Cameron argues for a pragmatic, evidence-based approach to AI consciousness, emphasizing that even a small probability of machine suffering represents a massive ethical risk requiring rigorous scientific investigation rather than dismissal.

    58 min
  • Lucius Caviola: A Future with Digital Minds? Expert Estimates and Societal Response
    2025/11/19

    Lucius Caviola is an Assistant Professor in the Social Science of AI at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, and a Research Associate in Psychology at Harvard University. His research explores how the potential arrival of conscious AI will reshape our social and moral norms. In today's interview, Lucius examines the psychological and social factors that will determine whether this transition unfolds well, or ends in moral catastrophe. He discusses:

    • Why experts estimate a 50% chance that conscious digital minds will emerge by 2050
    • The "takeoff" scenario where digital minds could outnumber humans in welfare capacity within a decade
    • How "biological chauvinism" leads people to deny consciousness even in perfect whole-brain emulations
    • The dual risks of "under-attribution" (unwittingly creating mass suffering) and "over-attribution" (sacrificing human values for unfeeling code)
    • Surprising findings that people refuse to "harm" AI in economic games even when they explicitly believe the AI isn't conscious

    Lucius argues that rigorous social science and forecasting are essential tools for navigating these risks, moving beyond intuition to prevent us from accidentally creating vast populations of digital beings capable of suffering, or failing to recognise consciousness where it exists.

    1 hr 12 min
  • Lenore Blum: AI Consciousness is Inevitable: The Conscious Turing Machine
    2025/11/03

    *Lenore refers to a few slides in this podcast; you can see them here.

    Intro

    Today's guest, distinguished mathematician and computer scientist Lenore Blum, explains why she and her husband Manuel believe machine consciousness isn't just possible but inevitable. Their reasoning? If consciousness is computational (and they're betting it is), and we can mathematically specify those computations, then we can build them. It's that simple, and that profound.

    In this conversation, hosts Will Millership and Calum Chace discuss with Lenore:

    • How the Conscious Turing Machine (CTM) draws from and extends the foundational ideas of Alan Turing's Universal Turing Machine.
    • Using mathematics to "extract and simplify" the complexities of consciousness, searching for the fundamental, formal principles that define it.
    • How the CTM acts as a high-level framework that aligns with the functionalities of competing theories like Global Workspace Theory and Integrated Information Theory (IIT).
    • Why the Blums believe that AI consciousness is "inevitable" and that this provides a functional "roadmap for a conscious AI".
    • The ethical implications of machine suffering, and why the phenomenon of "pain asymbolia" suggests a conscious AI must be able to suffer in order to function.
    • What lessons Alan Turing's original "imitation game" can offer us for creating a practical, real-world test for machine consciousness.

    Lenore's Work (links)

    • Blum, L., & Blum, M. (2024). AI Consciousness is Inevitable: A Theoretical Computer Science Perspective. arXiv. https://arxiv.org/pdf/2403.17101
    • Blum, L., & Blum, M. (2022). A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine. PNAS, 119(21). https://doi.org/10.1073/pnas.2115934119
    • Closer to Truth, Blums’ Conscious Turing Machine
    • Full list of references here.
    1 hr 43 min
  • Clara Colombatto: Perceptions of Consciousness, Intelligence, and Trust in Large Language Models
    2025/10/13

    Clara is an Assistant Professor in the Department of Psychology at the University of Waterloo in Canada, where she directs the Vision and Cognition Lab.

    Her lab is investigating various aspects of perception and cognition, with a particular focus on the perception of other minds and the visual roots of social cognition.

    The lab is also exploring how we can perceive not just others’ perceptual and cognitive states, but also their metacognitive states such as awareness, confidence, or uncertainty — and how such impressions facilitate communication and collaboration.

    Useful links:

    • Clara Colombatto personal website.
    • Vision and Cognition Lab website.
    • Folk psychological attributions of consciousness to large language models. Article.
    • Illusions of Confidence in Artificial Systems. Article.
    • PRISM website.
    47 min
  • Keith Frankish: Illusionism and Its Implications for Conscious AI
    2025/09/10

    Keith is an Honorary Professor in the Philosophy Department at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme at the University of Crete.

    Keith is best known for his theory of illusionism—the view that phenomenal consciousness, or the subjective feeling of experience, is an illusion. Rather than denying that we have conscious experiences, Keith argues that our intuitive conception of them as inherently mysterious or non-physical is mistaken.

    1 hr 6 min
  • Mark Solms: Engineering Consciousness – Can Robots "Give a Damn"?
    2025/08/07

    In this episode, we ask: if we wanted to construct a conscious mind from scratch, what would we need? That is the question our guest, Professor Mark Solms, addressed in the final chapter of his book The Hidden Spring: A Journey to the Source of Consciousness.

    Mark is a Professor in Neuropsychology at the University of Cape Town and president of the South African Psychoanalytical Association. He is also an advisor to PRISM and Conscium.

    Mark has contributed significantly to our understanding of consciousness through his pioneering research in the field of neuropsychoanalysis, which integrates Freudian theory with findings from contemporary neuroscience.

    1 hr 2 min
  • Jeff Sebo: AI Sentience, Welfare and Moral Status
    2025/07/14

    In this episode, we spoke to Jeff Sebo of New York University. Jeff is the author of the recently published book The Moral Circle: Who Matters, What Matters and Why.

    In it, he challenges us to expand our moral concern beyond the boundaries of species and substrate. He has also co-authored a number of papers arguing that AI welfare is an issue that needs to be taken seriously today.

    Links:

    • The Moral Circle: Who Matters, What Matters and Why. Book.
    • Jeff Sebo personal website.
    • Moral consideration for AI systems by 2030. (Paper)
    • Is there a tension between AI safety and AI welfare? (Paper)
    • PRISM website.
    46 min
  • Susan Schneider: Organoids, LLMs, and tests for AI consciousness
    2025/06/23

    In the second episode of Exploring Machine Consciousness, we welcomed philosopher and AI expert Professor Susan Schneider to discuss consciousness, quantum physics, and the future of conscious machines.

    Susan introduces her “Superpsychism” theory, exploring quantum coherence as a basis for consciousness, and explains why the AI Consciousness Test (ACT) could help determine whether machines truly have experiences, or are just mimicking human responses.

    Susan is sceptical that current LLMs show any convincing signs of consciousness, but believes that hybrid systems of organoids and LLMs could be compelling candidates.

    Links:

    • Susan Schneider's website
    • Superpsychism, paper by Susan Schneider & Mark Bailey
    • Artificial You: AI and the Future of Your Mind, book by Susan Schneider
    • Chatbot Epistemology, paper by Susan Schneider
    • If a Chatbot Tells You It Is Conscious, Should You Believe It?, article in Scientific American
    • The Easy Part of the Hard Problem: A Resonance Theory of Consciousness, paper by Tam Hunt and Jonathan Schooler
    • PRISM website: https://www.prism-global.com/
    45 min