The US government is taking potential AI dangers more seriously, following its decision to create an Artificial Intelligence Safety Institute Consortium (AISIC) earlier this year, with Apple as a member.
A proposed new law would outlaw deepfakes of real people made without their consent, and a government safety body will vet the next version of ChatGPT before it is released to the public …
AI safety guidelines already in place
The Biden administration proposed a set of safety guidelines for new AI products and services earlier this year, announced by Vice President Kamala Harris.
While abiding by the guidelines is (for now) voluntary, all of the tech giants involved in AI development have agreed to do so. This includes Apple, Amazon, Google, Meta, OpenAI, and Microsoft.
NO FAKES Act would make deepfakes illegal
One of the biggest concerns about AI is the ease with which deepfakes can be created. These are convincing fake photos, audio clips, and videos that make real people appear to do or say things they never actually did.
Deepfakes have been used to create non-consensual fake nudes of everyone from celebrities to schoolgirls, and to create damaging fake video footage of politicians. One recent example was a fake ad in which Kamala Harris appeared to say she was a diversity hire and a “deep state puppet.”
A bipartisan group of senators yesterday introduced the NO FAKES Act, which would make it illegal to create deepfakes of real people without their consent.
The NO FAKES Act would hold individuals or companies liable for damages for producing, hosting, or sharing a digital replica of an individual performing in an audiovisual work, image, or sound recording that the individual never actually appeared in or otherwise approved – including digital replicas created by generative artificial intelligence (AI). An online service hosting the unauthorized replica would have to take down the replica upon notice from a right holder.
This wouldn’t entirely eliminate the problem where politicians are concerned, as there would be a First Amendment exemption for parodies, but it should significantly reduce the problem if passed.
Government will vet the next version of ChatGPT
OpenAI CEO Sam Altman has tweeted that the company has agreed to give the US AI Safety Institute early access to the next ChatGPT model, so that it can be vetted for safety concerns before a public release.
Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations. Excited for this!