US Leads AI Safety with OpenAI, Anthropic Joining National AI Institute

Last week, in a one-of-a-kind effort, OpenAI signed an MOU with the US Artificial Intelligence Safety Institute (US AISI), part of the US Department of Commerce’s National Institute of Standards and Technology (NIST). At this juncture in AI’s evolution, the collaboration is aimed at furthering OpenAI’s commitment to safety, transparency and human-centric innovation by building a framework that the world can contribute to. It would give the US AI Safety Institute early access to test and evaluate future models prior to their public release. Anthropic has also agreed to a similar partnership.

Sam Altman, CEO of OpenAI, took to X (formerly Twitter) to underscore the significance of the partnership. “We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” said Altman, calling the move important and suggesting that such testing happen at a national level. “US needs to continue to lead!”

But Why the US AI Safety Institute?

Elizabeth Kelly, director of the US AI Safety Institute, has been a strong proponent of safety in AI innovation and has brokered many such strategic partnerships in the past. “With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety,” she said in a statement. 

The US AI Safety Institute was established in 2023, under the Biden-Harris administration, to help develop testing guidelines for safe AI innovation in the US.

“Safety promotes trust, which promotes adoption, which drives innovation, and that’s what we are trying to promote at the US AI Safety Institute,” she said in another interview, highlighting the institute’s role in the years ahead.

Through initiatives like this, the US could lead the way for broader voluntary adoption of AI safety practices. Anthropic, OpenAI’s rival, has previously collaborated with government bodies – like the UK’s Artificial Intelligence Safety Institute (AISI) – to conduct pre-deployment testing of its models. It would be interesting to see if OpenAI partners with the UK institute as well.

Why It Matters

The need for the US AI Safety Institute (US AISI) arises from concerns about the impact of poorly managed AI systems on democracy, as highlighted by Dario Amodei, CEO of Anthropic. 

Amodei said that AI must be aligned with human values and ethics to support democratic institutions effectively. The collaboration between Anthropic, OpenAI, and the US AISI is a response to the growing power of AI, which, if left unchecked, could exceed that of national governments and economies. This partnership aims to establish safety standards, conduct pre-deployment testing, and regulate AI to prevent misuse, particularly in politically sensitive areas such as elections.

“I think it’s just really important that we provide these services well. It makes democracy as a whole more effective, and if we provide them poorly, it undermines the notion of democracy,” said Amodei. 

US vs China vs India 

The US’s push for AI safety leadership with OpenAI and Anthropic aims to counter China’s rapid AI advancements and maintain global dominance in ethical AI governance. 

At the same time, there are concerns about China winning the AI race on the strength of its efficient state control, outpacing US efforts hindered by political gridlock. “China is probably going to win the AI game, as their state control is much more efficient than corrupt US politicians,” said Pratik Desai, adding that he wants the US and freedom to win. “I just don’t trust the current bunch of politicians.”

China’s dominance in several AI and technology fields is quite evident, with the country leading in the “visual AI and LLM field as they have the best state-operated surveillance system,” Desai added.

On the bright side, standardised scrutiny could promote a more democratic approach to developing models, something that is perhaps lacking in economies like China. More countries are slowly realising the importance of AI safety institutes, and the need to invest as much in AI safety as they do in AI development.

It’s high time India also considered establishing an AI Safety Institute, akin to those in the UK and US, to responsibly manage the rapid growth of AI technologies. “We need an AI Safety Institute here and now,” said Raghunath Mashelkar, former director-general of CSIR, arguing that such a body would maximise the benefits and minimise the risks of AI in the world’s most populous nation.

Rajeev Chandrasekhar, former Union Minister of India, also underscored the critical need for democracies and their allied nations to shape the future of technology, particularly in light of concerns raised by Paul Buchheit, creator of Gmail, that AI development led by China could result in global surveillance and censorship.

“It’s extremely important — more than critical — that the future of tech is shaped by democracies and their partner countries,” said Chandrasekhar.




