OpenAI forms a new team to study child safety

Under scrutiny from activists — and parents — OpenAI has formed a new team to study ways to prevent its AI tools from being misused or abused by kids.

In a new job listing on its career page, OpenAI reveals the existence of a Child Safety team, which the company says is working with platform policy, legal and investigations groups within OpenAI as well as outside partners to manage “processes, incidents, and reviews” relating to underage users.

The team is currently looking to hire a child safety enforcement specialist, who’ll be responsible for applying OpenAI’s policies in the context of AI-generated content and working on review processes related to “sensitive” (presumably kid-related) content.

Tech vendors of a certain size dedicate a fair amount of resources to complying with regulations like the U.S. Children’s Online Privacy Protection Rule (COPPA), which mandates controls over what kids can — and can’t — access on the web as well as what sorts of data companies can collect on them. So the fact that OpenAI is hiring child safety experts doesn’t come as a complete surprise, particularly if the company expects a significant underage user base one day. (OpenAI’s current terms of use require parental consent for children ages 13 to 18 and prohibit use for kids under 13.)

But the formation of the new team, which comes several weeks after OpenAI announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines and landed its first education customer, also suggests a wariness on OpenAI’s part of running afoul of policies pertaining to minors’ use of AI — and negative press.

Kids and teens are increasingly turning to GenAI tools for help not only with schoolwork but personal issues. According to a poll from the Center for Democracy and Technology, 29% of kids report having used ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Some see this as a growing risk.

Last summer, schools and colleges rushed to ban ChatGPT over plagiarism and misinformation fears. Since then, some have reversed their bans. But not all are convinced of GenAI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use GenAI in a negative way — for example creating believable false information or images used to upset someone.

In September, OpenAI published documentation for ChatGPT in classrooms with prompts and an FAQ to offer educator guidance on using GenAI as a teaching tool. In one of the support articles, OpenAI acknowledged that its tools, specifically ChatGPT, “may produce output that isn’t appropriate for all audiences or all ages” and advised “caution” with exposure to kids — even those who meet the age requirements.

Calls for guidelines on kid usage of GenAI are growing.

The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of GenAI in education, including implementing age limits for users and guardrails on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” Audrey Azoulay, UNESCO’s director-general, said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”


Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
