
The Retrieval System That Could Save—or Mislead—Your Mom
About this episode
When AI answers with confidence—but not accuracy—who gets hurt first?
In this eye-opening episode, Alon Bochman, founder of RAGMetrics and a former AI leader at Microsoft and Google, unpacks the hidden dangers inside today's most trusted AI tools. From faulty medical-chatbot advice to flawed voice interfaces for seniors, we explore how retrieval-augmented generation (RAG) systems fail, and what must be done to protect the people least equipped to detect those failures.
💡 What You’ll Learn:
- The #1 reason RAG systems mislead users (and why most teams miss it)
- How to harden your AI system for seniors, caregivers, and critical domains
- Why hallucination isn't just a bug, but a systemic design flaw
- How to embed empathy, accuracy, and auditability into your RAG pipeline
- The ethical red line when AI pretends to “remember” someone with dementia
👤 About Alon:
Alon Bochman is the founder of RAGMetrics, leading the charge in retrieval optimization, observability, and AI safety for regulated industries. His work sits at the intersection of machine-learning rigor and human-first ethics, especially for the aging population.
🎯 If you build or deploy AI systems, or care for someone who relies on them, this episode is a must-listen.