NIST launches a new platform to assess generative AI

The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests technology for the U.S. government, companies and the broader public, on Monday announced the launch of NIST GenAI, a new program to assess generative AI technologies, including text- and image-generating AI.

The platform is designed to evaluate various forms of generative AI tech. NIST GenAI will release benchmarks, help create “content authenticity” detection (i.e., deepfake-checking) systems, and encourage the development of software that can spot the source of fake or misleading information, NIST explains on the newly launched NIST GenAI site and in a press release.

“The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies,” the press release reads. “These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content.”

NIST GenAI’s first project is a pilot study to build systems that can reliably tell the difference between human-created and AI-generated media, starting with text. (While many services purport to detect deepfakes, studies and our own testing have shown them to be unreliable, particularly when it comes to text.) NIST GenAI is inviting teams from academia, industry and research labs to submit either “generators” — AI systems to generate content — or “discriminators,” which are systems that try to identify AI-generated content.

Generators in the study must produce summaries given a topic and a set of documents, while discriminators must detect whether a given summary is AI-written. To ensure fairness, NIST GenAI will provide the data necessary to train generators and discriminators; systems trained on publicly available data, including but not limited to open models like Meta’s Llama 3, won’t be accepted.
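To make the task concrete, here is a minimal sketch of what the core logic of a discriminator submission might look like: a system that takes a summary and returns an estimated probability that it is AI-generated. The interface, model choice, and toy training examples below are illustrative assumptions only; the pilot's actual submission format and training data are defined by NIST GenAI, and the data must come from NIST rather than public sources.

```python
# Minimal sketch of a text "discriminator": given a summary, return the
# estimated probability that it was AI-generated. Features, model, and the
# toy examples are placeholders -- in the pilot, training data would be
# supplied by NIST GenAI, not drawn from public sources.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training pairs: (summary text, label), where 1 = AI-generated
# and 0 = human-written.
train_texts = [
    "The report outlines three policy options for expanding rural broadband.",
    "In conclusion, the aforementioned documents collectively elucidate the topic.",
]
train_labels = [0, 1]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
classifier = LogisticRegression()
classifier.fit(vectorizer.fit_transform(train_texts), train_labels)

def score_summary(summary: str) -> float:
    """Return the estimated probability that `summary` is AI-written."""
    features = vectorizer.transform([summary])
    return float(classifier.predict_proba(features)[0, 1])

if __name__ == "__main__":
    print(score_summary("This summary synthesizes the key findings of the corpus."))
```

A generator submission would be the mirror image: a system that, given a topic and a set of source documents, produces a summary for the discriminators to judge.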

Registration for the pilot will begin May 1, with the results scheduled to be published in February 2025.

NIST GenAI’s launch and its deepfake-focused first study come as deepfakes grow exponentially.

According to data from Clarity, a deepfake detection firm, 900% more deepfakes have been created this year compared to the same time frame last year. It’s causing alarm, understandably. A recent poll from YouGov found that 85% of Americans said they were concerned about the spread of misleading deepfakes online.

The launch of NIST GenAI is a part of NIST’s response to President Joe Biden’s executive order on AI, which laid out rules requiring greater transparency from AI companies about how their models work and established a raft of new standards, including for labeling content generated by AI.

It’s also the first AI-related announcement from NIST after the appointment of Paul Christiano, a former OpenAI researcher, to the agency’s AI Safety Institute.

Christiano was a controversial choice because of his “doomerist” views; he once predicted that “there’s a 50% chance AI development could end in [humanity’s destruction].” Critics, reportedly including scientists within NIST, fear that Christiano may encourage the AI Safety Institute to focus on “fantasy scenarios” rather than realistic, more immediate risks from AI.

NIST says that NIST GenAI will inform the AI Safety Institute’s work.
