An ethics framework for AI

January 2021

As different forms of artificial intelligence start to be seen across multiple sectors, including financial services and transport, policymakers are debating the ethical and regulatory requirements needed to protect consumers while promoting innovation.


Keen focus of regulators on AI

The legal framework surrounding artificial intelligence has been a prominent discussion point in Brussels over the last year. When Ursula von der Leyen’s Commission took office, it placed the development of a framework for AI at the top of its agenda, committing to issue a legislative proposal on a coordinated EU approach to AI within its first 100 days in office.

In 2020, the European Commission published a number of regulatory proposals aimed at creating, and in some cases clarifying, rules on the use of AI. Perhaps the most important of these was its highly anticipated White Paper on AI, issued at the beginning of last year, which set out the various policy options the Commission was considering for regulating AI. The Commission also published the Digital Services Act, which would require platforms to disclose to relevant parties (such as leasing and rental companies) how their algorithms work. These proposals were accompanied by a number of other initiatives that include some regulation of the use of AI, namely the European Digital Strategy, the European Strategy for Data, the Digital Markets Act and the Data Governance Act. You can read about these in our dedicated article on the topic.


Parliament assesses ethical elements

Last summer, the European Parliament set up a dedicated committee focused on AI, the Committee on Artificial Intelligence in a Digital Age (AIDA). This Committee, in addition to scrutinising the policy initiatives outlined above, adopted a proposal on an Ethics Framework for AI. This framework sets out guiding principles for future laws relating to AI, essentially establishing the ethical principles and legal obligations that should apply when developing, deploying, and using AI, robotics and related technologies in the EU, including software, algorithms and data use.

This framework stresses the need for a “human-centric” approach to AI (something policymakers have been advocating since the beginning of these discussions on AI), and the need for AI applications to be “human-made”. The framework also sets out rules related to:

  • Transparency and accountability
  • Safeguards against bias and discrimination
  • Right to redress (where an individual is “harmed” in some way by an AI application)
  • Social and environmental responsibility
  • Respect for privacy and data protection


What is still to come?

In March, the European Commission is expected to publish proposed rules on high-risk applications of AI (such as self-driving cars). These will be followed by rules clarifying liability in the context of AI later this year.