Authentic Intelligence: Designing Responsible AI for Healthcare
This episode of The Signal Room dives into AI strategy and readiness, focusing on the design principles that truly matter in healthcare AI. Chris sits down with Keshavan Shashadri, Senior Machine Learning Engineer, for a grounded conversation on authentic intelligence: AI systems designed to understand context, respect human judgment, and recognize their limits.
Together, they explore why context is crucial in healthcare AI and where it often breaks down, spanning patient history, clinical workflows, institutional policy, regulation, and human availability. Keshavan outlines four critical layers of context necessary for building AI systems that are trusted, safe, and effective.
The discussion covers why large language models (LLMs) are not replacements for doctors, the importance of AI supporting rather than supplanting clinical judgment, and the need for human-in-the-loop checkpoints where risk is significant. It also distinguishes between transparency and true explainability in regulated environments, and highlights that AI bias often arises from what a system doesn't know rather than what it does.
This episode is a practical, ethical, and strategy-driven discussion on deploying responsible AI in healthcare leadership. If you're invested in healthcare ethics, AI regulation, and designing AI systems that earn trust, this conversation is essential listening.