
Does ChatGPT have a mind?
About this content

Do large language models like ChatGPT actually understand what they're saying? Can AI systems have beliefs, desires, or even consciousness? Philosophers Henry Shevlin and Alex Grzankowski debunk the common arguments against LLM minds and explore whether these systems genuinely think.

This episode examines popular objections to AI consciousness - from "they're just next token predictors" to "it's just matrix multiplication" - and explains why these arguments fail. The conversation covers the Moses illusion, competence vs performance, the intentional stance, and whether we're applying unfair double standards to AI that we wouldn't apply to humans or animals.

Key topics discussed:
- Why "just next token prediction" isn't a good argument against LLM minds
- The competence vs performance distinction in cognitive science
- How humans make similar errors to LLMs (Moses illusion, conjunction fallacy)
- Whether LLMs can have beliefs, preferences, and understanding
- The difference between base models and fine-tuned chatbots
- Why consciousness in LLMs remains unlikely despite other mental states
Featured paper: "Deflating Deflationism: A Critical Perspective on Debunking Arguments Against LLM Mentality", authored by Alex Grzankowski, Geoff Keeling, Henry Shevlin and Winnie Street.
Guests:
- Henry Shevlin - Philosopher and AI ethicist at the Leverhulme Centre for the Future of Intelligence, University of Cambridge
- Alex Grzankowski - Philosopher at King's College London

#AI #Philosophy #Consciousness #LLM #ArtificialIntelligence #ChatGPT #MachineLearning #CognitiveScience