OpenAI buffs safety team and gives board veto power on risky AI


OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power — of course, whether it will actually use it is another question entirely. Normally the ins […]

© 2023 TechCrunch. All rights reserved. For personal use only.


Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.



