It’s been five months since President Biden signed an executive order (EO) to address the rapid development of artificial intelligence. Today, the White House took another step toward implementing the EO with a policy that regulates the federal government’s use of AI. Among the safeguards agencies must have in place are measures to mitigate the risk of algorithmic bias.
“I believe that all leaders from government, civil society and the private sector have a moral, ethical and social responsibility to ensure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring that everyone can enjoy its benefits,” Vice President Kamala Harris told reporters at a news conference.
Harris announced three binding requirements under a new Office of Management and Budget (OMB) policy. First, agencies must verify that any artificial intelligence tools they use “do not endanger the rights and safety of the American people.” They have until December 1 to put “specific safeguards” in place so that the AI systems they employ do not affect Americans’ safety or rights. Otherwise, an agency will have to stop using an AI product unless its leaders can demonstrate that scrapping the system would have an “unacceptable” impact on critical operations.
Impact on Americans’ Rights and Safety
According to the policy, an AI system is considered to affect safety if it is “used or intended to be used under real-world conditions to control or significantly influence the outcome” of certain activities and decisions. These include maintaining the integrity of elections and voting infrastructure; controlling critical safety functions of infrastructure such as water systems, emergency services and power grids; operating self-driving vehicles; and controlling the physical movements of robots in “workplaces, schools, housing, transportation, medical, or law enforcement settings.”
Agencies must also stop using AI systems that impact Americans’ rights unless they have appropriate safeguards in place or can otherwise justify their use. Purposes the policy presumes to be rights-impacting include predictive policing; social media monitoring for law enforcement; detecting plagiarism in schools; blocking or restricting protected speech; detecting or measuring human emotions and thoughts; pre-employment screening; and “reproduction of another person’s likeness or voice without explicit consent.”
As for generative artificial intelligence, the policy states that agencies should assess its potential benefits. They will also need to “establish adequate safeguards and oversight mechanisms to allow the use of generative AI in the agency without undue risk.”
Transparency Requirements
The second requirement compels agencies to be transparent about the AI systems they use. “Today, President Biden and I are asking U.S. government agencies to publish online annually an inventory of their artificial intelligence systems, an assessment of the risks those systems may pose, and how they are managing those risks,” Harris said.
As part of this effort, agencies will be required to release government-owned AI code, models and data, as long as doing so does not harm the public or government operations. If agencies cannot disclose specific AI use cases for sensitivity reasons, they will still need to report metrics about them.
Internal Oversight
Last but not least, federal agencies will need internal oversight of their use of artificial intelligence. That includes each department appointing a chief AI officer to oversee all of the agency’s uses of AI. “This is about ensuring the responsible use of artificial intelligence and understanding that we in government must have senior leaders specifically responsible for overseeing the adoption and use of artificial intelligence,” Harris noted. Many agencies will also need to establish artificial intelligence governance committees by May 27.
The vice president added that prominent figures from the public and private sectors, including civil rights leaders and computer scientists, helped shape the policy along with business leaders and legal scholars.
As an example of these safeguards, OMB suggested that the Transportation Security Administration may have to let air travelers opt out of facial recognition scans without losing their spot in line or facing delays. It also recommended human oversight of things like AI-based fraud detection and diagnostic decisions in the federal health care system.
As you might imagine, government agencies already use artificial intelligence systems in a variety of ways. The National Oceanic and Atmospheric Administration is developing AI models to help it more accurately forecast extreme weather, floods and wildfires, while the Federal Aviation Administration is using a system to help manage air traffic in major metropolitan areas and shorten travel times.
“Artificial intelligence poses not only risks but also tremendous opportunities to improve public services and make progress on societal challenges such as addressing climate change, improving public health and promoting equitable economic opportunity,” OMB Director Shalanda Young told reporters. “When used responsibly and with proper oversight, artificial intelligence can help agencies reduce wait times for critical government services, improve accuracy and expand access to essential public services.”
The policy is the latest in a series of efforts to regulate the rapidly growing field of artificial intelligence. While the European Union has adopted a comprehensive set of rules for AI use in the bloc and federal bills are in the pipeline, U.S. efforts to regulate AI have so far taken more of a patchwork approach at the state level. This month, Utah enacted a law to protect consumers from AI fraud. And in Tennessee, the Ensuring Likeness Voice and Image Security Act (aptly known as the ELVIS Act) aims to protect musicians from deepfakes that clone their voices without permission.