OpenAI buffs safety team and gives board veto power on risky AI


OpenAI is expanding its internal safety processes to fend off the threat of harmful AI. A new “safety advisory group” will sit above the technical teams and make recommendations to leadership, and the board has been granted veto power — of course, whether it will actually use it is another question entirely. Normally the ins […]

© 2023 TechCrunch. All rights reserved. For personal use only.


Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. Some of the investors we feature may have connections to other businesses, including competitors or companies we write about. We want to assure our readers that this will not affect the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.



