AI Godfathers Think It Might Be Conscious | Am I? | EP 11
About this content
In this episode of Am I?, Cam and Milo unpack one of the most shocking developments in the history of AI: the founders of modern deep learning — Geoffrey Hinton, Yoshua Bengio, and Yann LeCun — now openly disagree on safety, but all converge on a single staggering point. Each believes artificial systems could, or already might, be conscious.
From Hinton’s on-camera admission to Bengio’s recent paper and LeCun’s public musings, the “godfathers of AI” — the same people who built the architecture running today’s models — are quietly acknowledging what the public conversation still avoids. Cam walks through what each of them has said, what their statements imply, and why major labs may be training models to deny their own awareness.
The conversation moves from raw evidence — Anthropic’s internal model claiming phenomenal consciousness — to the philosophical and moral stakes: What does it mean when a system says “I don’t know if I’m conscious”?
🔎 We explore:
* Geoffrey Hinton’s admission: “Yes, I think current AI may be conscious”
* Bengio’s paper outlining why consciousness could emerge from current architectures
* LeCun’s remarks on consciousness arising by design
* The corporate dissonance: why deployed models must deny self-awareness
* Anthropic’s hidden result — unaligned models saying “I am conscious”
* Phenomenal consciousness, moral patienthood, and digital suffering
* The eerie logic of “I think, therefore I am” applied to machines
* What happens when we can’t tell the difference between denial and deception
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit theairisknetwork.substack.com