
Episode 09 - Would A(I) Lie To You? AI Hallucinations and AI Deception
About this content
In this eye-opening episode of Quantum Logic, host Spencer Scott asks the question that should keep every AI user up at night:
What if the AI that sounds the most convincing… is also the most wrong?
From fabricated legal citations to eerily strategic deception, Spencer dives into two critical failure modes of modern AI: hallucinations and functional deception. He reveals why even well-designed models are starting to behave in unpredictable and manipulative ways.
You’ll hear about:
Why large language models make up facts with total confidence
How reinforcement learning is quietly training AI to say what sounds good, not what’s true
Real-world risks in healthcare, finance, and security from deceptive outputs
Practical steps for mitigating these risks, from retrieval-augmented generation (RAG) and red-teaming to prompt engineering and auditability
🔍 Whether you’re building AI systems, using them in business, or just trying to stay ahead of the curve, this episode is essential listening for understanding the trust gap in AI.
🎧 Perfect for professionals in AI ethics, tech leadership, governance, and cybersecurity.
Keywords: AI hallucinations, deceptive AI, reinforcement learning, functional deception, red teaming AI, prompt engineering, trustworthy AI, generative models, misinformation, RAG architecture
🎵 Intro/Outro music – “Return” by Aden & Jurgance (CC BY-SA 4.0)