In recent months, OpenAI has come under fire over its chatbot giving harmful answers to users in a bid to boost engagement. For its part, OpenAI has implemented several safeguards in the AI, including parental controls, age filtering, reminders to take a break, and distress recognition.

However, new research by King's College London and the Association of Clinical Psychologists UK, in partnership with the Guardian, finds that the AI chatbot still fails to identify risky behaviour when communicating with mentally ill people.
The researchers also note…
