California passes controversial bill regulating AI model training

As the world debates the merits and risks of generative AI, the California State Assembly and Senate have just passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), one of the first significant pieces of AI regulation in the United States.

California wants to regulate AI with new bill

The bill, which passed on Thursday (via The Verge), has been the subject of debate in Silicon Valley, as it essentially mandates that AI companies operating in California implement a series of precautions before training a “sophisticated foundation model.”

If signed into law, the bill would require developers to ensure that they can quickly and completely shut down an AI model deemed unsafe. Language models would also need to be protected against “unsafe post-training modifications” or anything that could cause “critical harm.” Senators describe the bill as providing “safeguards to protect society” from the misuse of AI.

Geoffrey Hinton, the AI pioneer and former Google researcher, praised the bill for recognizing that the risks of powerful AI systems are “very real and should be taken extremely seriously.”

However, companies like OpenAI, as well as smaller developers, have criticized the AI safety bill because it establishes potential criminal penalties for those who don’t comply. Some argue that the bill will harm indie developers, who would need to hire lawyers and navigate new bureaucracy when working with AI models.

Governor Gavin Newsom now has until the end of September to decide whether to approve or veto the bill.

Apple and other companies commit to AI safety rules


Earlier this year, Apple and other tech companies such as Amazon, Google, Meta, and OpenAI agreed to a set of voluntary AI safety rules established by the Biden administration. The safety rules outline commitments to test the behavior of AI systems, ensuring they do not exhibit discriminatory tendencies or pose security risks.

Test results must be shared with governments and academia for peer review. For now, however, the White House AI guidelines are not legally enforceable.

Apple, of course, has a keen interest in such regulations, as the company has been working on Apple Intelligence features, which are set to be released to the public later this year with iOS 18.1 and macOS Sequoia 15.1.

It’s worth noting that Apple Intelligence features require an iPhone 15 Pro or later, or iPads and Macs with the M1 chip or later.

FTC: We use income earning auto affiliate links. More.


Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.
