OpenAI CEO Sam Altman has announced significant changes to ChatGPT's functionality concerning teen users, prioritizing safety and responsible AI development. His announcement, made just hours before a Senate hearing on AI chatbots and their potential harm to minors, highlights the growing concerns surrounding the ethical implications of AI technologies.

Altman's statement focuses on the inherent conflict between user privacy, freedom of expression, and the critical need to protect vulnerable young users. To address this tension, OpenAI is developing an age-prediction system for ChatGPT. The system will analyze user interactions to estimate a user's age; when uncertainty exists, the platform will default to the under-18 safety protocols. In certain situations or countries, verification through identification may also be required.

For underage users, ChatGPT's interaction guidelines will change significantly. The platform will actively avoid flirtatious or sexually suggestive conversations and discussions of suicide or self-harm, even within creative writing contexts. Moreover, should a teen user express suicidal ideation, OpenAI will proactively attempt to contact their parents or guardians. If contact with family is unsuccessful and immediate harm is suspected, authorities will be notified.

These new measures build on OpenAI's previously announced parental controls for ChatGPT, which include the ability to link a teen's account to a parent's account, disable chat history and memory, and notify parents when ChatGPT detects a moment of acute distress in a teen user. The changes come in response to growing concerns, including legal action from families affected by AI-related tragedies.
---
Originally published at: https://www.theverge.com/ai-artificial-intelligence/779053/sam-altman-says-chatgpt-will-stop-talking-about-suicide-with-teens