OpenAI has confirmed that conversations on ChatGPT that indicate a risk of serious physical harm to others may be reviewed by human moderators and, in extreme cases, referred to the police.
The company outlined these measures in a recent [blogpost](https://openai.com/index/helping-people-when-they-need-it-most/) explaining how the AI handles sensitive interactions and potential safety risks.
## Rules for self-harm and threats to others
Claiming that [ChatGPT](https://www.livemint.com/ai/how-chatgpt-convinced-techie-to-kill-mother-and-self-best-friend-ai-gives-dangerous-fatal-advice-11756628450841.html) is designed to provide empathetic support to users experiencing distress, OpenAI stressed that its safeguards differentiate between self-harm and threats to others…