Will Europe’s Historic Artificial Intelligence Law Be a Template for the United States?

"On artificial intelligence, trust is a must, not a nice to have," said Margrethe Vestager.

European Commissioner for Competition Margrethe Vestager. Thierry Monasse/Getty Images

The European Union has unveiled sweeping legislation that, if passed, would strictly limit the use of artificial intelligence, or A.I., a relatively recent technology that has found its way into almost every aspect of modern life and sparked concerns about the dangers to privacy and democracy it could pose if it fell into the wrong hands.

The EU’s executive branch, the European Commission, released a 108-page draft Wednesday containing rules around the use of A.I. in a range of “high-risk” activities for which the U.S. doesn’t yet have clear laws.

“On artificial intelligence, trust is a must, not a nice to have,” Margrethe Vestager, the Executive Vice President of the European Commission for A Europe Fit for the Digital Age, said in a statement. “With these landmark rules, the EU is spearheading the development of new global norms to make sure A.I. can be trusted.”

Vestager is also the EU’s Commissioner for Competition, a role in which she has led high-profile antitrust probes in recent years into American tech giants, including Facebook, Google and Apple.

Like the EU’s General Data Protection Regulation (GDPR) enacted in 2018, the artificial intelligence regulation is expected to help set a template for the U.S. and governments around the world on regulating emerging technologies.

In the U.S., discussions about regulating A.I. have taken place at both the state and federal levels, but few bills have advanced through legislatures. In 2020, general A.I. bills and resolutions were introduced in at least 13 states, according to the National Conference of State Legislatures. Only one state, Utah, enacted a bill, creating a “deep technology talent initiative” within the state’s higher education system.

American tech giants with business in Europe are already gearing up to challenge the EU’s proposed law. A policy analyst at the Center for Data Innovation, a Washington, D.C. think tank funded by several large U.S. tech companies, said the regulation is “a damaging blow to the Commission’s goal of turning the EU into a global A.I. leader” and could cause Europe to “fall even further behind the U.S. and China,” per Forbes.

In any case, it could take years for those proposed rules to become laws. In the EU, new laws must be approved by both the European Parliament and members of the European Council representing the bloc’s 27 national governments.

Here are some of the key points in the proposal:

Strict Rules Around Facial Recognition

Facial recognition is one of the most controversial applications of A.I. Under the EU framework, any use of facial recognition and real-time biometric identification in public spaces would be prohibited unless law enforcement needs the technology to handle a public security emergency, such as preventing a terror attack or finding a missing child.

Disclosure Requirement for “High-Risk” A.I. Providers

Companies developing and using high-risk A.I. applications, such as self-driving software, would be required to provide proof of safety and documentation explaining how the technology makes decisions. The companies would also have to guarantee human oversight in how the applications are created and used.

Software-generated media, including “deep fake” videos, would be subject to strict transparency requirements: creators would have to notify users that the content was generated through automated means.

Other “High-Risk” Applications

The proposed legal framework determines an A.I. application’s level of risk based on criteria including intended purpose, the number of potentially affected people, and the irreversibility of harm.

The draft identifies eight categories of high-risk applications: biometric identification, management and operation of critical infrastructure, education, employment, privacy, law enforcement, border control and justice systems.

Heavy Penalties Facing Big Tech

Under the proposal, companies violating the rules could face fines of up to 6 percent of their annual global revenue. For Facebook, that would be up to $5.2 billion based on its 2020 sales. For Google, it would be $11 billion.
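
As a rough sanity check on those figures, here is a minimal sketch of the fine calculation, assuming the companies’ publicly reported 2020 annual revenues (roughly $86 billion for Facebook and $182.5 billion for Alphabet, Google’s parent company); the variable names are illustrative and not part of the proposal:

```python
# Rough check of the maximum fines cited above, assuming each company's
# publicly reported 2020 annual revenue (in billions of U.S. dollars).
FINE_RATE = 0.06  # up to 6 percent of annual global revenue under the proposal

revenues_2020_billion_usd = {
    "Facebook": 85.97,            # reported 2020 revenue
    "Google (Alphabet)": 182.53,  # reported 2020 revenue of parent company
}

for company, revenue in revenues_2020_billion_usd.items():
    max_fine = revenue * FINE_RATE
    print(f"{company}: up to ${max_fine:.1f} billion")

# Output:
# Facebook: up to $5.2 billion
# Google (Alphabet): up to $11.0 billion
```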
