Further to recent developments in the digital arena with the DMA and the DSA, the EU strikes again with the AI Act, the world's first comprehensive law regulating the use of artificial intelligence in the EU.
On 8 December 2023, European Union policymakers agreed on a new law to regulate artificial intelligence as part of the EU's digital strategy.[1] Although the Act still needs to go through a few final steps before formal adoption, the political agreement means its key outlines are set. Talks on the law's final form will continue with EU countries in the Council and are expected to conclude in the coming months; implementation of the Act's provisions will then be phased in over six to thirty-six months.
The scope of the new AI Act
The AI Act[2] sets a new global benchmark for countries seeking to harness the potential benefits of the technology while trying to protect against its possible risks, such as automating jobs, spreading misinformation online and endangering national security.
The EU’s priority is to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly; it also aims to guarantee that AI systems are overseen by people rather than by automation to prevent harmful outcomes.
What do we know so far?
The new law will apply to all AI system providers, manufacturers and deployers (or users). The Act is industry-agnostic and will apply across all sectors.
Like other EU regulations, the AI Act will have extraterritorial scope: it can apply to companies based inside or outside the EU, as long as their AI systems are placed on the EU market or their outputs are used in the EU.
The AI Act proposes a risk-based model with rules applicable to different risk levels.
| Level | Obligations | Examples of AI |
|---|---|---|
| Unacceptable risk | Considered a threat to people and therefore banned. These include: cognitive behavioural manipulation of people or of specific vulnerable groups; social scoring, i.e. classifying people based on behaviour, socio-economic status or personal characteristics; and real-time and remote biometric identification systems. | Facial or emotion recognition systems. |
| High risk | AI systems that negatively affect safety or fundamental rights. All high-risk AI systems will need to be assessed before being put on the market and throughout their lifecycle. They fall into two categories: (1) AI systems used in products covered by the EU's product safety legislation (for example, toys, aviation, cars, medical devices and lifts); and (2) AI systems falling into eight specific areas that will have to be registered in an EU database: biometric identification and categorisation of natural persons; management and operation of critical infrastructure; education and vocational training; employment, worker management and access to self-employment; access to and enjoyment of essential private services and public services and benefits; law enforcement; migration, asylum and border control management; and assistance in legal interpretation and application of the law. | HR recruitment tools, creditworthiness evaluations. |
| Generative AI | Generative AI is also considered high risk but is treated as a separate classification because specific rules apply to this type of AI. In particular, generative AI will be subject to transparency requirements: disclosing that content was generated by AI; designing the model to prevent it from generating illegal content; and publishing summaries of copyrighted data used for training. | ChatGPT, Bard. |
| Limited risk | Limited-risk AI systems should comply with minimal transparency requirements, allowing users to make informed decisions. Users should be made aware when they are interacting with AI and be able to decide whether they want to continue using it. | Deepfakes, chatbots, etc. |
| Minimal risk | The EU will not regulate AI systems in this category at this stage. However, the EU will likely draft voluntary codes of conduct to promote best practices. | Email spam filters. |
What are the consequences of non-compliance?
The proposal requires Member States to designate a national supervisory authority to supervise the application and implementation of the regulation.[3]
Moreover, an AI Office within the Commission will be set up to oversee general-purpose AI (GPAI), contribute to fostering standards and testing practices, and enforce the common rules in all member states.
Either enforcer can impose fines of up to €35 million or 7% of worldwide annual turnover.[4] However, failure to comply with the Act can also open the door to civil claims (individual or collective) and to fines from other regulators, for example Data Protection Authorities.
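For a rough sense of scale, here is a minimal sketch in Python of how that cap works. It assumes the "whichever is higher" reading from the Council press release cited above, and the turnover figure used is purely hypothetical:

```python
# Sketch of the AI Act's top-tier fine cap: EUR 35 million or 7% of
# worldwide annual turnover. Assumes the higher of the two applies,
# per the Council press release; not a statement of the final legal text.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07  # 7% of worldwide annual turnover


def max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given annual turnover."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)


# Hypothetical example: a company with EUR 2 billion in annual turnover.
# 7% of EUR 2bn is EUR 140m, which exceeds the EUR 35m floor.
print(f"EUR {max_fine(2_000_000_000):,.0f}")  # EUR 140,000,000
```

For large companies, the turnover-based limb will almost always dominate, which is why the percentage figure is the one most multinationals will watch.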
Final remarks
Although it is clear that regulation is needed, the how was never obvious: finding the right balance between innovation and safety seemed an impossible task, with so many caveats and unforeseen consequences. Yet, once again, Europe has positioned itself as a pioneer and global standard-setter. Now it is time to watch and learn.
1. News, European Parliament, available at https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
2. The AI Act draft is available at https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF
3. European Parliament, Artificial Intelligence Act briefing, available at https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf
4. European Council, Press Release, "Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world", 9 December 2023 ("European Council Press Release"), available at https://www.consilium.europa.eu/en/press/press-releases/2023/12/09/artificial-intelligence-act-council-and-parliament-strike-a-deal-on-the-first-worldwide-rules-for-ai/. Note that this amount differs from the original draft of the AI Act (see Article 71, on penalties).