From DW News.
(WARNING: This video contains references to suicide and self-harm.)
The parents of a teenager who took his own life are suing OpenAI, accusing its chatbot, ChatGPT, of encouraging their son to end his life. This isn’t the first case of a vulnerable person dying by suicide after interacting with an AI. OpenAI acknowledges these “heartbreaking cases” and says ChatGPT is trained to direct people to professional help — but admits there have been times when the system didn’t behave as intended.
As AI chatbots grow in popularity, millions are turning to them for therapy, emotional support, and companionship. That raises urgent questions: Can AI truly support people in crisis? When should it hand things over to real experts? We spoke to Laura Reiley about her own story and the questions it raises about the limits of AI.
#chatgpt #aitherapy #openai
For more news go to: http://www.dw.com/en/
Follow DW on social media:
►Instagram: https://www.instagram.com/dwnews
►TikTok: https://www.tiktok.com/@dwnews
►Facebook: https://www.facebook.com/deutschewellenews/
►Twitter: https://twitter.com/dwnews
For videos in German, visit: https://www.youtube.com/dwdeutsch
Subscribe: https://www.youtube.com/user/deutschewelleenglish?sub_confirmation=1