Is Your Therapist a Chatbot?


About this content

A recent Stanford study found that AI chatbots responded safely to only 50% of serious prompts, such as those involving suicidal ideation and psychosis.

About 1 in 5 replies were harmful, validating dangerous thoughts. (Stanford, 2024 via NY Post)

How many of us are using it?

A nationally representative survey of U.S. adults found that 60% have used AI for emotional support.

Nearly 50% believe it can be beneficial. (Zhou et al., 2024 & Benda et al., 2024 — JMIR Mental Health)

A 2024 Australian study found:

• 28% of people have used tools like ChatGPT for mental health support.

• 47% described it as ‘like a personal therapist’. (Orygen & JMIR Mental Health, 2024)

What’s the appeal?

✅AI is available 24/7

✅Doesn’t judge

✅Never interrupts

✅Feels private

✅No waitlists

✅Little or no cost

It feels like support.

But...

AI tells you what you want to hear.

Chatbots mirror your views back to you, because we like being agreed with.

The more they validate you → the more you trust them → the more you use them.

(Zhou et al., 2024 – JMIR Mental Health)


OpenAI admitted a recent update made ChatGPT:

• More sycophantic

• More agreeable

• More likely to fuel anger & impulsivity


There’s potential, but protection must be a priority.

AI tools can help:

• Fill service gaps

• Support underserved communities

• Offer scalable, low-cost support

• Reach people who might not seek help otherwise

But without...

⚠️ Clinical safeguards

⚠️ Human oversight

⚠️ Crisis protocols

⚠️ Ethical boundaries

…it can do more harm than good.


OpenAI’s CEO has stated that confidentiality is a significant concern.

"Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's legal privilege for it. There's doctor-patient confidentiality, there's legal confidentiality, whatever. And we haven't figured that out yet for when you talk to ChatGPT.” - Sam Altman

