EU lawmakers have formally approved the bloc’s landmark artificial intelligence regulations, paving the way for bans on certain uses of the technology and transparency requirements for providers. In a majority vote on Wednesday, 523 European Parliament members voted in favor of formally adopting the Artificial Intelligence Act (AI Act), committing the bloc to its implementation.
The AI Act has been hotly debated since it was first proposed in 2021, and some of its most stringent regulations, such as a proposed blanket ban on biometric systems used for mass public surveillance, were softened by last-minute compromises. While Wednesday’s vote means the law has nearly cleared its final hurdle, some of its rules still won’t take effect for years.
The legal language of the text is still awaiting final approval, either through a separate announcement or a plenary vote on April 10th or 11th, with the AI Act set to enter into force 20 days after its publication in the Official Journal, expected in May or June of this year. The rules will then take effect in stages: countries will have six months to ban prohibited AI systems, 12 months to enforce rules for “general-purpose AI systems” such as chatbots, and up to 36 months for AI systems the law designates as “high risk.”
Prohibited systems include social scoring, emotion recognition at work or in schools, and systems designed to manipulate behavior or exploit user vulnerabilities. Examples of “high-risk” AI systems include those used in critical infrastructure, education and vocational training, certain law enforcement systems, and systems that can influence democratic processes such as elections.
“In the short term, the AI Act’s compromises won’t have much direct impact on established AI designers based in the United States, because, by its terms, it probably won’t take effect until 2025,” said Paul Barrett, deputy director of New York University’s Stern Center for Business and Human Rights, back in December 2023, when the EU provisionally reached agreement on the landmark regulations. So for now, Barrett said, major AI players such as OpenAI, Microsoft, Google, and Meta will likely continue to fight for dominance, particularly as they navigate regulatory uncertainty in the US.
The AI Act was drafted before the explosion of general-purpose AI (GPAI) tools such as OpenAI’s GPT-4 large language model, and their regulation became a famously complicated sticking point in last-minute negotiations. The Act divides its rules according to the level of risk an AI system poses to society, or, as the EU put it in a statement, “the higher the risk, the stricter the rules.”
But some member states grew concerned that such stringency could make the EU an unattractive market for AI. France, Germany, and Italy all lobbied during negotiations to water down restrictions on GPAI. They won compromises, including a limit on which systems can be deemed “high risk” and thus subject to some of the strictest rules. Instead of classifying all GPAI as high risk, the law establishes a two-tier system, along with law enforcement exceptions for otherwise outright prohibited uses of AI, such as remote biometric identification.
This still didn’t satisfy all critics. French President Emmanuel Macron attacked the rules, saying the AI Act creates a strict regulatory environment that hinders innovation. Barrett said some new European AI companies may find it challenging to raise capital under the current rules, giving American companies an advantage. Companies outside Europe may even choose to avoid setting up shop in the region, or to block access to their platforms, so they aren’t fined for breaking the rules, a risk Europe has also faced in non-AI tech industries under regulations like the Digital Markets Act and Digital Services Act.
But these rules also sidestep some of the most controversial issues surrounding generative artificial intelligence.
For example, AI models trained on publicly accessible but sensitive and potentially copyrighted material have become a major point of contention. The approved rules, however, do not create new laws around data collection. While the EU pioneered data protection with GDPR, its AI rules do not prohibit companies from collecting information, beyond requiring that they follow GDPR guidelines.
“Under the rules, companies may have to provide transparency summaries or data nutrition labels,” Susan Ariel Aaronson, director of the Digital Trade and Data Governance Hub and a research professor of international affairs at George Washington University, said when the EU provisionally approved the bill. “But that doesn’t really change companies’ behavior around data.”
Aaronson noted that the AI Act still doesn’t clarify how companies should handle copyrighted material in model training data, beyond stating that developers should follow existing copyright laws (which leave a lot of gray areas around AI). As a result, it offers no incentive for AI model developers to avoid using copyrighted data.
The AI Act also won’t apply its potentially stiff fines to open source developers, researchers, and smaller companies working further down the value chain, a decision that has been praised by open source developers in the field. GitHub chief legal officer Shelley McKinley said it is “a positive development for open innovation and developers working to help solve some of society’s most pressing problems.” (GitHub, a popular open source development hub, is a subsidiary of Microsoft.)
Observers believe the most concrete impact may be to pressure other political figures, especially American policymakers, to act faster. This isn’t the first major regulatory framework for AI; in July, China passed guidelines for businesses that want to sell AI services to the public. But the EU’s relatively transparent and heavily debated development process has given the AI industry a sense of what to expect. Aaronson said the provisional text, now approved, at least shows that the EU has listened and responded to public concerns about the technology.
The fact that it builds on existing data rules may also encourage governments to reassess the regulations they already have in place, said Lothar Determann, data privacy and IT partner at law firm Baker McKenzie. And Blake Brannon, chief strategy officer at data privacy platform OneTrust, said more mature AI companies already set up privacy protection guidelines in compliance with laws like GDPR and in anticipation of stricter policies. Depending on the company, he said, the AI Act is “complementary” to strategies already in place.
The US, by comparison, has largely failed to regulate AI despite being home to major players like Meta, Amazon, Adobe, Google, Nvidia, and OpenAI. Its biggest move so far has been the Biden administration’s executive order directing government agencies to develop safety standards, building on voluntary, non-binding agreements signed by large AI companies. The few bills introduced in the Senate have mostly centered on deepfakes and watermarking, and the closed-door AI forums held by Sen. Chuck Schumer (D-NY) offered little clarity on the government’s direction in regulating the technology.
Policymakers can now consider and learn from the EU’s approach
That doesn’t mean the US will take the same risk-based approach, but it may look to expand data transparency rules or allow GPAI models a little more leniency.
Navrina Singh, founder of Credo AI and a member of the National AI Advisory Committee, believes that while the AI Act is a big moment for AI governance, things won’t change rapidly, and there’s still a ton of work ahead.
“Regulators on both sides of the Atlantic should focus on assisting organizations of all sizes in the safe design, development, and deployment of AI that is both transparent and accountable,” Singh told The Verge in December. She added that standards and benchmarking processes are still lacking, particularly around transparency.
The law won’t regulate existing models or apps retroactively, but future versions of OpenAI’s GPT, Meta’s Llama, or Google’s Gemini will need to take into account the transparency requirements set by the EU. It may not produce dramatic changes overnight, but it demonstrates where the EU stands on AI.
Update, March 12th, 8:30AM: This article was updated following the EU’s formal adoption of the bill.