Anthropic now lets kids use its AI tech — within limits



AI startup Anthropic is changing its policies to allow minors to use its generative AI systems — in certain circumstances, at least. 

Announced in a post on the company’s official blog Friday, Anthropic will begin letting teens and preteens use third-party apps (though not necessarily Anthropic’s own apps) powered by its AI models, so long as the developers of those apps implement specific safety features and disclose to users which Anthropic technologies they’re leveraging.

In a support article, Anthropic lists several safety measures devs creating AI-powered apps for minors should include, like age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for minors. The company also says that it may make available “technical measures” intended to tailor AI product experiences for minors, like a “child-safety system prompt” that developers targeting minors would be required to implement. 

Devs using Anthropic’s AI models will also have to comply with “applicable” child safety and data privacy regulations such as the Children’s Online Privacy Protection Act (COPPA), the U.S. federal law that protects the online privacy of children under 13. Anthropic says it plans to “periodically” audit apps for compliance, suspending or terminating the accounts of developers who repeatedly violate the requirements, and will mandate that developers “clearly state” on public-facing sites or documentation that they’re in compliance. 

“There are certain use cases where AI tools can offer significant benefits to younger users, such as test preparation or tutoring support,” Anthropic writes in the post. “With this in mind, our updated policy allows organizations to incorporate our API into their products for minors.”

Anthropic’s change in policy comes as kids and teens are increasingly turning to generative AI tools for help not only with schoolwork but also with personal issues, and as rival generative AI vendors — including Google and OpenAI — are exploring more use cases aimed at children. This year, OpenAI formed a new team to study child safety and announced a partnership with Common Sense Media to collaborate on kid-friendly AI guidelines. And Google made its chatbot Bard, since rebranded as Gemini, available to teens in English in selected regions.

According to a poll from the Center for Democracy and Technology, 29% of kids report having used generative AI like OpenAI’s ChatGPT to deal with anxiety or mental health issues, 22% for issues with friends and 16% for family conflicts.

Last summer, schools and colleges rushed to ban generative AI apps — in particular ChatGPT — over fears of plagiarism and misinformation. Since then, some have reversed their bans. But not all are convinced of generative AI’s potential for good, pointing to surveys like the U.K. Safer Internet Centre’s, which found that over half of kids (53%) report having seen people their age use generative AI in a negative way — for example, creating believable false information or images used to upset someone (including pornographic deepfakes).

Calls for guidelines on kid usage of generative AI are growing.

The UN Educational, Scientific and Cultural Organization (UNESCO) late last year pushed for governments to regulate the use of generative AI in education, including implementing age limits for users and guardrails on data protection and user privacy. “Generative AI can be a tremendous opportunity for human development, but it can also cause harm and prejudice,” Audrey Azoulay, UNESCO’s director-general, said in a press release. “It cannot be integrated into education without public engagement and the necessary safeguards and regulations from governments.”


Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

