As 2024 looks like the year that Apple makes its big push into generative AI, the federal government is also encouraging the use of AI by its own agencies.
However, the White House today announced that government agencies looking to take advantage of AI must apply three safeguards to mitigate the potential risks of the technology.
Three rules for federal government AI initiatives
Engadget notes that Vice President Kamala Harris announced the new policy, which gives federal agencies three requirements when introducing AI initiatives:
- Ensure safety
- Be transparent
- Appoint a chief AI officer
“I believe that all leaders from government, civil society and the private sector have a moral, ethical and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its benefits,” the VP told reporters on a press call.
Ensure safety
First, agencies will need to ensure that any AI tools they use “do not endanger the rights and safety of the American people.” They have until December 1 to put “concrete safeguards” in place ensuring that the AI systems they employ don’t compromise Americans’ safety or rights.
This requirement covers not just physical safety but also areas like election integrity and voting infrastructure.
One big concern raised about AI systems is that because they learn from what has been done in the past, they can perpetuate systemic bias. Appropriate safeguards are therefore required for AI use in areas like predictive policing and pre-employment screening.
Be transparent
Federal agencies must disclose the AI systems they are using, with full details made public in most cases.
“Today, President Biden and I are requiring that every year, US government agencies publish online a list of their AI systems, an assessment of the risks those systems might pose and how those risks are being managed,” Harris said.
As part of this effort, agencies will need to publish government-owned AI code, models, and data, as long as doing so won’t harm the public or government operations.
Appoint a chief AI officer
Last but not least, federal agencies will need to have internal oversight of their AI use. That includes each department appointing a chief AI officer to oversee all of an agency’s use of AI.
“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris noted. Many agencies will also need to have AI governance boards in place by May 27.
Apple’s relatively slow move into generative AI is almost certainly the result of the company’s own concerns about the potential risks.
Photo by Ana Lanza on Unsplash