Ep.2 - Should Chatbots Report Suicidal Thoughts?
About this episode
In this gripping episode, suicide-prevention expert Dr. Stacey Freedenthal confronts a haunting question: should AI chatbots report when a user reveals suicidal thoughts? Drawing on her lived experience of secrecy, her clinical expertise, and her recent writing on AI, Stacey unpacks a chilling reality: people are already confiding life-or-death thoughts in chatbots. But when AI feels human, the risks can turn deadly.
We dive deep into tragic real-world cases, the paradox of bots that over-praise instead of challenge, and the unsettling possibility of machines helping people hide their intent.
Stacey explains what companies must do to safeguard vulnerable users, and why no algorithm can replace human connection. This is a provocative, must-hear conversation about secrecy, stigma, and the future of suicide prevention in an AI-driven world.
Watch this episode on my YouTube Channel and let me know your thoughts:
https://youtu.be/_7Z2LM4UQGU
Follow us on Instagram: @relatingtoai
Do you have a story or insights to share? Please send an email to: paula@relatingtoai.com
www.relatingtoai.com