The EU Artificial Intelligence Act moves forward


As artificial intelligence (“AI”) technologies continue to proliferate across industries and society, there is a growing consensus among legal scholars and policymakers that regulation is necessary to mitigate the risks posed by AI. Thus, on 21 April 2021, the European Commission adopted a proposal for a Regulation laying down harmonized rules on artificial intelligence (the “Artificial Intelligence Act”). The proposal aims to improve the functioning of the internal market by establishing a uniform legal framework for the development, marketing, and use of artificial intelligence in conformity with the values of the European Union.

On 11 May 2023, the European Parliament’s Internal Market Committee and Civil Liberties Committee approved the proposal for the Artificial Intelligence Act and adopted a draft negotiating mandate on it (hereinafter the “MEPs’ draft negotiating mandate”), suggesting amendments that aim to ensure safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems. The MEPs have proposed that the Artificial Intelligence Act refer to a uniform, technology-neutral definition of AI, intended to apply to both present-day and future AI systems. In particular, foundation models (including generative AI systems such as ChatGPT) now fall within the scope of the latest draft Regulation.

The rules of the Artificial Intelligence Act follow a risk-based approach by separating AI applications into the following risk categories:

i) The first category comprises those applications and systems that create an unacceptable risk to people’s safety, such as government-run social scoring, which classifies people based on their social behavior, socio-economic status, and personal characteristics. AI applications falling under this category would be strictly prohibited.

ii) The second category comprises high-risk applications, which would be subject to specific legal requirements. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety, and fundamental rights of persons in the EU. The MEPs’ draft negotiating mandate strives to expand the classification of high-risk areas by including, among others, AI in recommender systems used by social media platforms.

iii) The third category comprises limited-risk systems, which would be subject to specific transparency obligations (for example, informing users that they are interacting with an AI system).

iv) The fourth category comprises minimal- or low-risk systems. AI applications falling under this category would remain unregulated.

The Artificial Intelligence Act will apply to the use of AI by companies, as well as in the public sector and law enforcement. It will operate alongside other laws, including the General Data Protection Regulation. According to the proposed amendments under the MEPs’ draft negotiating mandate, the most serious breaches would be subject to fines of up to €40 million or 7% of worldwide annual turnover, whichever is higher.

The MEPs’ draft negotiating mandate will be voted on during the European Parliament’s plenary session taking place between 12-15 June 2023. If the Parliament endorses the mandate, trilogue negotiations between representatives of the European Parliament, the Council of the European Union, and the European Commission will follow. The EU Artificial Intelligence Act is anticipated to be adopted toward the end of 2023, and a grace period of around two years is expected to give impacted parties time to comply with the new Regulation.

You can read the proposal for a Regulation laying down harmonized rules on artificial intelligence here:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206