Federal government AI initiatives are subject to three rules

As 2024 looks like the year that Apple makes its big push into generative AI, the federal government is also encouraging the use of AI by its own agencies …

However, the White House has today announced that government agencies looking to take advantage of AI must apply three safeguards to mitigate the technology's potential risks.

Three rules for Federal government AI initiatives

Engadget notes that Vice President Kamala Harris announced the new policy, which gives federal agencies three requirements when introducing AI initiatives:

  • Ensure safety
  • Be transparent
  • Appoint a chief AI officer

“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits,” the VP told reporters on a press call.

Ensure safety

First, agencies will need to ensure that any AI tools they use “do not endanger the rights and safety of the American people.” They have until December 1 to put in place “concrete safeguards” ensuring that the AI systems they employ don’t compromise Americans’ safety or rights.

This requirement isn’t limited to physical safety: it also covers things like maintaining election integrity and protecting voting infrastructure.

One big concern raised about AI systems is that because they learn from what has been done in the past, they can perpetuate systemic bias. Appropriate safeguards are therefore required for AI usage in areas like predictive policing and pre-employment screening.

Be transparent

Federal agencies must disclose the AI systems they are using, with full details made public in most cases.

“Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed,” Harris said.

As part of this effort, agencies will need to publish government-owned AI code, models and data, as long as doing so won’t harm the public or government operations.

Appoint a chief AI officer

Last but not least, federal agencies will need internal oversight of their AI use, with each department appointing a chief AI officer responsible for all of the agency’s use of the technology.

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris noted. Many agencies will also need to have AI governance boards in place by May 27.

Apple’s relatively slow move into generative AI is almost certainly the result of the company’s own concerns about the potential risks.

Photo by Ana Lanza on Unsplash

FTC: We use income earning auto affiliate links. More.


Disclaimer

We strive to uphold the highest ethical standards in all of our reporting and coverage. We at StartupNews.fyi want to be transparent with our readers about any potential conflicts of interest that may arise in our work. It’s possible that some of the investors we feature may have connections to other businesses, including competitors or companies we write about. However, we want to assure our readers that this will not have any impact on the integrity or impartiality of our reporting. We are committed to delivering accurate, unbiased news and information to our audience, and we will continue to uphold our ethics and principles in all of our work. Thank you for your trust and support.

