OpenAI: Why LLM Hallucinates and How Our Tests Make It Worse

About this content

Why do AI chatbots confidently make up facts?

This podcast explores the surprising reasons language models 'hallucinate'. We'll uncover how these plausible falsehoods originate from statistical errors during pretraining and persist because evaluations reward guessing over acknowledging uncertainty. Learn why models are optimized to be good test-takers, much like students guessing on an exam, and what it takes to build more trustworthy AI systems.
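To illustrate the incentive the description alludes to, here is a minimal sketch (not taken from the episode) of why accuracy-only grading rewards guessing: under one-point-per-correct-answer scoring, any nonzero chance of being right makes a guess outscore an abstention. The helper name `expected_score` is hypothetical, used only for this illustration.

```python
# Minimal sketch: expected score on one question under binary accuracy grading.
# Abstaining ("I don't know") earns 0; a guess earns 1 with probability p_correct.

def expected_score(p_correct: float, guess: bool) -> float:
    """Expected score for a single question under 1-point-per-correct grading."""
    return p_correct if guess else 0.0

for p in (0.1, 0.3, 0.5):
    print(f"p={p}: guess -> {expected_score(p, True):.2f}, "
          f"abstain -> {expected_score(p, False):.2f}")

# Even at p=0.1 the guess has positive expected score while abstaining scores 0,
# so an evaluation that only counts accuracy pushes models toward confident
# guessing rather than acknowledging uncertainty.
```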
