Far-reaching proposed laws are part of the EU’s expansion of its role as a global tech enforcer

The European Union has unveiled strict regulations to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use the technology.

The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrolment selections and the scoring of exams. They would also cover the use of artificial intelligence by law enforcement and court systems, areas considered “high risk” because they could threaten people’s safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.

The rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, as well as scores of other companies, among them medical and healthcare organisations.

Companies that violate the new regulations could face fines of up to 6 percent of global sales, though the rules could take several years to move through the European Union’s policymaking process.

“On artificial intelligence, trust is a must, not a nice-to-have,” Margrethe Vestager, the European Commission executive vice president who oversees digital policy for the 27-nation bloc, said in a statement. “With these landmark rules, the E.U. is spearheading the development of new global norms to make sure A.I. can be trusted.”

The European Union regulations would require companies offering artificial intelligence in high-risk areas to supply regulators with proof of its safety, including risk assessments and documentation explaining how the technology makes decisions. The companies would also have to guarantee human oversight in how the systems are created and used.

Some applications, such as chatbots that hold humanlike conversations in customer-service settings, and software that creates hard-to-detect manipulated images known as “deepfakes”, would have to make clear to users that what they were seeing was computer-generated.

For years, the European Union has been the world’s most aggressive watchdog of the technology industry, with other nations often using its policies as blueprints. The bloc has already enacted the world’s most far-reaching data-privacy regulations, and is debating additional antitrust and content-moderation laws.

However, many of the core ideas were leaked in advance of the announcement, prompting concern in the technology community that the rules could stifle innovation.

Herbert Swaniker, a technology expert at the law firm Clifford Chance, said the hefty proposed fines would give AI regulation much more force, and that the proposal was “extremely ambitious” in scope. “There’s a lot to do to sharpen some of these concepts,” he said. “The fines are one thing – but how will vendors address the significant costs and human input needed to make compliance a reality?

“The proposals will force vendors to fundamentally rethink how AI is procured and designed.”