Grant Gross
Senior Writer

White House requires agencies to create AI safeguards, appoint CAIOs

News
Mar 28, 2024 | 5 mins
Artificial Intelligence | Generative AI | Government IT

A new OMB policy focuses on maintaining public safety and protecting human rights as the federal government begins to embrace AI.

US government agencies will need to provide human oversight of AI models that make critical decisions about healthcare, employment, and other issues affecting people in order to comply with a new policy from the White House Office of Management and Budget (OMB).

The AI use policy, announced Thursday, requires agencies to appoint chief AI officers (CAIOs) and to put safeguards in place to protect human rights and maintain public safety.

“While AI is improving operations and service delivery across the Federal Government, agencies must effectively manage its use,” the policy says. “With appropriate safeguards in place, AI can be a helpful tool for modernizing agency operations and improving Federal Government service to the public.”

The 34-page policy requires most agencies, excepting the Department of Defense and intelligence agencies, to inventory their AI use annually. Agencies must also continually monitor their AI use.

The OMB policy will, for example, allow airline travelers to opt out of the Transportation Security Administration’s (TSA’s) use of facial recognition software, according to a fact sheet issued with the policy.

Other examples from the fact sheet: When AI is used in the federal healthcare system to support diagnostic decisions, a human will be required to verify the AI’s results. When AI is used to detect fraud in government services, a human will be required to review the results, and affected people will be able to seek remedies for any harm the AI causes.

AI’s impact on public safety

The policy defines several uses of AI that could impact public safety and human rights, and it requires agencies to put safeguards in place by Dec. 1. The safeguards must include ways to mitigate the risks of algorithmic discrimination and provide the public with transparency into government AI use.

Agencies must stop using AIs that can’t meet the safeguards. They must also notify the public of any AI exempted from complying with the OMB policy and explain the justification.

AIs that control dams, electrical grids, traffic control systems, vehicles, and robotic systems within workplaces are classified as safety-impacting. Meanwhile, AIs that block or remove protected speech, produce risk assessments of individuals for law enforcement agencies, or conduct biometric identification are classified as rights-impacting. AI decisions about healthcare, housing, employment, medical diagnosis, and immigration status also fall into the rights-impacting category.

The OMB policy also calls on agencies to release government-owned AI code, models, and data, when the releases do not pose a risk to the public or government operations.

The new policy received mixed reviews from human rights and digital rights groups. The American Civil Liberties Union called the policy an important step toward protecting US residents against AI abuses. But the policy has major holes in it, the ACLU noted, including broad exceptions for national security systems and intelligence agencies, as well as for sensitive law enforcement information.

“Federal uses of AI should not be permitted to undermine rights and safety, but harmful and discriminatory uses of AI by national security agencies, state governments, and more remain largely unchecked,” Cody Venzke, senior policy counsel with the ACLU, said in a statement. “Policymakers must step up to fill in those gaps and create the protections we deserve.”  

Congressional action is also needed because the OMB policy doesn’t apply to private industry, added Nick Garcia, policy counsel at Public Knowledge, a digital rights group.

“This is an instance of the federal government leading by example, and it shows what we should expect of those with the power and resources to ensure that AI technology is used safely and responsibly,” he said.

Leading by example

Garcia said he hopes the standards and practices for responsible AI use in the OMB policy will serve as a “good testing ground” for future rules. “We need significant government support for public AI research and resources, a comprehensive privacy law, stronger antitrust protections, and an expert regulatory agency with the resources and authority to keep up with the pace of innovation,” he added.

Meanwhile, another digital rights group, the Center for Democracy and Technology (CDT), applauded the OMB policy, saying it allows the federal government to lead by example. The policy will also give the federal government a consistent set of AI rules and give the public transparency into government AI use, the CDT said.

While much of the policy’s focus is on safety and human rights, it also encourages government agencies to explore responsible use of AI, noted Kevin Smith, chief product officer at DISCO, an AI-powered legal technology company.

The OMB’s approach differs from the EU’s AI Act, which leans more into the risks of AI, Smith said. “The OMB’s approach encourages agencies to adopt AI on their own terms, allowing for risk assessment, reporting, and accountability,” he added.

The OMB policy follows an October executive order from President Joe Biden that outlined guidelines for the safe use of AI.

“This next step is akin to encouragement with transparency,” Smith said. “The administration didn’t set itself up to fail or unnecessarily curtail innovative thinking, which was smart considering how rapidly AI is advancing.”

Grant Gross, a senior writer at CIO, is a long-time technology journalist. He previously served as Washington correspondent and later senior editor at IDG News Service. Earlier in his career, he was managing editor at Linux.com and news editor at tech careers site Techies.com. In the distant past, he worked as a reporter and editor at newspapers in Minnesota and the Dakotas.
