US government tackling AI dangers, with deepfake ban and more

The US government is taking potential AI dangers more seriously, following its decision earlier this year to create the Artificial Intelligence Safety Institute Consortium (AISIC), which counts Apple among its members.

A proposed new law would outlaw non-consensual deepfakes, and a government body will carry out safety checks on the next version of ChatGPT before it is released to the public …

AI safety guidelines already in place

The Biden administration earlier this year proposed a set of safety guidelines for new AI products and services, announced by VP Kamala Harris.

While abiding by the guidelines is (for now) voluntary, all of the tech giants involved in AI development have agreed to do so, including Apple, Amazon, Google, Meta, OpenAI, and Microsoft.

NO FAKES Act would make non-consensual deepfakes illegal

One of the biggest concerns about AI is the ease with which deepfakes can be created. These are convincing-looking photos, audio, and video recordings that appear to show real people doing or saying things they never did.

Deepfakes have been used to create non-consensual fake nudes of everyone from celebrities to schoolgirls, and to create damaging fake video footage of politicians. One recent example was a fake ad in which Kamala Harris appeared to say she was a diversity hire and a “deep state puppet.”

A bipartisan group of Senators yesterday introduced the NO FAKES (Nurture Originals, Foster Art, and Keep Entertainment Safe) Act, which would make it illegal to create deepfakes of real people without their consent.

The NO FAKES Act would hold individuals or companies liable for damages for producing, hosting, or sharing a digital replica of an individual performing in an audiovisual work, image, or sound recording that the individual never actually appeared in or otherwise approved – including digital replicas created by generative artificial intelligence (AI). An online service hosting the unauthorized replica would have to take down the replica upon notice from a right holder.

This wouldn’t entirely eliminate the problem where politicians are concerned, as there would be a First Amendment exclusion for parodies, but if passed, the law should significantly reduce such abuses.

Government will vet the next version of ChatGPT

OpenAI CEO Sam Altman has tweeted that the company has agreed to give the US AI Safety Institute early access to its next ChatGPT model, so that it can be vetted for safety concerns before a public release.

Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations. Excited for this!

Photo by BoliviaInteligente on Unsplash
