
Date: January 30, 2020
By: Tzveta Dryanovska
Service: Intelligence & Technology
Sector: Perspectives
Reading time: 5 mins

The European approach on AI – what is on the regulatory horizon

Last December, the clock started ticking on one of the big promises of newly appointed European Commission President Ursula von der Leyen – to deliver European legislation on Artificial Intelligence in her first 100 days in office. As Ms. von der Leyen’s March deadline approaches, a leaked draft document from the Commission sheds light on what AI regulation the EU’s executive branch could be planning and how that would create a “European approach” to the technology.

Most of the media attention surrounding the leaked draft white paper from the Commission on AI has thus far centered on just one of the five regulatory options the EU is mulling – a 3-5-year temporary ban on using AI for facial recognition in public spaces. While that aspect of the paper could have implications for governments – and some businesses – it is still only one of the five regulatory scenarios the Commission is considering. The remaining options for legislation offer key insights into how the European Commission will approach the technology – and which businesses will be affected.

Crucially, at a time when companies from all sectors are undergoing digitalization, future AI legislation will impact businesses across the board and not just the ones typically defined as “technology companies.” For all industry players to best prepare themselves for regulation, it is vital to understand the Commission’s thinking on AI, data and liability.

Liability for products…but also services

Liability has been one of the key questions when it comes to regulating AI. Among the questions the European Commission has been asking: who is responsible when an AI misbehaves, at which stage of designing or deploying the technology does that responsibility arise, and in which situations does a failure constitute significant harm to the user?

As a first step, the Commission has noted that the Union already has Product Liability legislation in force, though it has not been updated since 1985. This legislation could be up for review, unless the European Commission decides that a separate new law is needed on AI specifically.

And while the details of liability regulation are still vague, we know the Commission will address liability for both AI products and services. Specific services the EU is already eyeing are health, transport and financial services.

Measuring risk will go beyond the tech sector

The Commission's preferred regulatory option would foresee legally binding requirements for developers and users of AI, but would focus only on "high-risk applications" of the technology. The draft white paper outlines a list of criteria for what would constitute a high-risk application and adds that this would be accompanied by a list of high-risk sectors. While the sectors are still undecided, the Commission once again points to transportation and healthcare as examples.

Considerations include how important the output of the AI is for a user (e.g. whether or not one receives social benefits), the irreversibility of damage (physical harm and death are highlighted by the paper with an example of self-driving vehicles) as well as the ability of the user to choose not to use the service (with an example of how difficult that could be if AI is used in the healthcare sector).

GDPR is still king

When it comes to concerns about privacy and data, many point to the EU’s General Data Protection Regulation (GDPR) and argue against new measures that could further muddle the EU’s complex regime of rules already facing difficulties with implementation.

It seems the Commission agrees. Thus, AI applications that fall into the "low-risk" category would not be subject to mandatory requirements and would only need to comply with the GDPR and other existing EU laws.

Additionally, while the 3-5-year moratorium on using AI for facial recognition in public spaces is one of the options the Commission is considering, the draft paper clearly states that this would be a far-reaching measure the EU executive prefers not to introduce. To avoid alarming bans and to keep from hampering the development of the technology, the paper notes the preferred option would rely on strict implementation of the rules already spelled out in the GDPR – a law that will play a major role in the regulation of AI, no matter which regulatory scenario the Commission ends up picking.

Next steps

The final version of the white paper will be officially presented on February 19. Shortly after, the Commission will begin the process of public consultations: a key period for stakeholders who wish to shape the final legislative text, and one that could take up a full year. While the long process of adopting the legislation in the European Parliament and Council might not start until early 2021, businesses that could be affected (particularly in the tech, industrial, transport and health sectors) should keep a close eye on the process and engage sooner rather than later.