
CDT Europe’s AI Bulletin: April 2024

Also authored by CDT Europe’s AI Policy Fellow Jonathan Schmidt.

Policymakers in Europe are hard at work on all things artificial intelligence, and we’re here to keep you updated. We’ll cover laws and policies that relate to AI, and their implications for Europe, fundamental rights, and democracy. If you’d like to receive CDT Europe’s AI Bulletin via email, you can sign up on CDT’s website.

The EU AI Act is Adopted 

After two years of intense negotiation, the European Parliament voted on 13 March to adopt the EU AI Act, with 523 MEPs voting in favour, 46 against, and 49 abstentions. While reaching this crucial stage was no easy feat, the Parliament’s rubber stamp does not negate potential hurdles ahead for bringing the new regulation into force. You can read the latest version of the AI Act here.

In CDT Europe’s initial reaction to the Act and subsequent op-ed, we discussed the serious human rights holes that remain in the law. In this month’s AI Bulletin, we dive deeper into those human rights wins and losses, and explain the framework of the legislation. For even more on the key aspects of the AI Act, see our extensive explainer.

A Risk-Based Approach

The final AI Act adopts a risk-based regulatory approach: in brief, all AI systems are categorised as presenting unacceptable, high, limited, or minimal risk. The Act outlaws AI systems posing unacceptable risks, heavily regulates those it considers high-risk, and imposes transparency and information obligations on specific AI systems that it deems a transparency risk.

EU lawmakers agreed that eight types of AI systems pose unacceptable risk and are therefore banned. Those include biometric categorisation systems, real-time facial recognition, and social scoring (systems that classify individuals into groups based on known or predicted characteristics). While some of these prohibitions are outright bans, others are limited, or allow for numerous exceptions that present significant concerns:

  • Emotion recognition is only banned in educational and workplace settings;
  • Criminal profiling is only partially banned, and explicitly allowed to support human assessments of a person’s involvement with a crime;
  • Real-time biometric identification (RBI) in publicly accessible spaces is still allowed in a range of circumstances, with known sites of human rights abuse excluded from the scope of the provision, despite civil society calls for a total ban. Despite the Act’s safeguards, the exceptions allow for widespread scanning in practice.   

Most contentious throughout negotiations was how the AI Act would categorise AI systems deemed high-risk based on use cases. Negotiators ultimately agreed to list high-risk use cases, including but not limited to critical infrastructure, employment, law enforcement, and migration, in Annex III of the AI Act, though Annex III is intended to serve only as an indicator of an AI system or use case's high-risk nature, not conclusive proof.

We noted with concern that the Act allows an AI provider to self-assess their AI system as not high-risk, even when the area of deployment is listed in Annex III, if they ultimately judge that the system does not pose a significant risk of harm to people’s health, safety, or fundamental rights. This would allow providers to bypass several obligations that apply to high-risk AI systems, including undertaking fundamental rights impact assessments. A provider would only be unable to exploit this self-assessment loophole in cases where an AI system is high-risk because it involves profiling and is deployed in any of the areas identified in Annex III. 
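To make this decision logic concrete, below is a minimal illustrative sketch in Python of how the Annex III trigger, the profiling exception, and the self-assessment carve-out interact. The function and flag names are our own simplification for explanation, not terminology from the Act.

```python
# Illustrative sketch of the high-risk test described above. The flag names
# are a simplification for explanation, not terminology from the Act.

def is_high_risk(in_annex_iii_area: bool,
                 involves_profiling: bool,
                 provider_claims_no_significant_risk: bool) -> bool:
    """Approximate the interaction of Annex III, profiling, and self-assessment."""
    if not in_annex_iii_area:
        return False  # this pathway is triggered by Annex III listing
    if involves_profiling:
        return True   # profiling in an Annex III area: the carve-out is unavailable
    # Otherwise a provider may self-assess the system out of the high-risk
    # category -- the loophole discussed above.
    return not provider_claims_no_significant_risk

# A hiring tool (an Annex III area) whose provider claims no significant risk
# escapes high-risk obligations unless it profiles individuals.
print(is_high_risk(True, False, True))  # False: the loophole applies
print(is_high_risk(True, True, True))   # True: profiling closes the loophole
```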

General-Purpose AI

Negotiations over how to approach governance of general-purpose AI (GPAI) systems and their potential impacts were also highly politicised. The final agreement determined that these models pose systemic risk when they have high-impact capabilities, or significant reach and impact on the EU internal market. Lawmakers based the definition of ‘high-impact capabilities’ on the cumulative amount of compute used to train the model; systemic risk is presumed when that value exceeds 10^25 floating-point operations (FLOP), though this numerical threshold is subject to review by the European Commission at a later stage. Once a GPAI model has been classified as posing systemic risk on this basis, developers will have to respect additional obligations. While they can contest the systemic-risk categorisation, final decisions lie with the European Commission.
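For a sense of scale, the sketch below compares an estimated training-compute figure against the 10^25 FLOP threshold. The 6 × parameters × tokens rule of thumb is a common heuristic for dense transformer training compute, not a formula from the Act, and the model figures are hypothetical.

```python
# Rough illustration of the systemic-risk compute threshold discussed above.
# The 6 * parameters * tokens estimate is a common heuristic for dense
# transformer training compute; it does not appear in the Act itself.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # cumulative training compute (FLOP)

def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Heuristic estimate of total training compute for a dense model."""
    return 6 * parameters * training_tokens

# Hypothetical model: 100 billion parameters trained on 20 trillion tokens.
flop = estimated_training_flop(100e9, 20e12)
print(f"{flop:.2e} FLOP -> presumed systemic risk: {flop > SYSTEMIC_RISK_THRESHOLD_FLOP}")
```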

Governance Structures 

The AI Act’s risk-based approach is complex, as is its enforcement and governance structure. The Regulation will largely rely on national-level ‘market surveillance authorities’, created under EU product safety legislation that predates the Act, to enforce the provisions on AI systems. Every member state must designate at least one market surveillance authority as a point of contact under the AI Act; each may choose to create a brand-new market surveillance authority, or appoint one of the many existing ones. The choice of entity will be crucial, and differences between member states’ choices may lead to diverging enforcement approaches.

The Act also provides for a certain level of regional oversight of AI systems; the Commission is tasked with producing guidelines and delegated acts that will inform future interpretation of the AI Act. Beyond the Commission, the newly-created AI Office will play a complementary role, mainly in relation to GPAI models. Other actors that the Act created to support the Commission and AI Office include the European AI Board; an Advisory Forum consisting of a variety of members from civil society, academia, and the private sector; and a scientific panel of independent experts.

A Bittersweet Ending for Those Fighting to Protect Human Rights

The AI Act sets a global precedent as the first major comprehensive law on artificial intelligence. While it ostensibly aims to protect and strengthen human rights and privacy in the digital world, the final text fails to meet the bar on human rights protection, as civil society has previously articulated. Most crucially, the law’s default exemption for national security uses of AI is a significant cause for concern. And, as highlighted above, many of the prohibitions on certain AI uses have been significantly narrowed by exceptions and carve-outs.

One other concern is that, while the Act’s creation of a publicly accessible EU database on high-risk AI is a win for transparency and accountability, AI systems deployed in law enforcement, migration, asylum, and border control contexts are to be registered in a non-public section of the database that will only be fully accessible to the Commission. 

A fuller explanation of CDT Europe’s reservations on the human rights implications of the AI Act is available on our website. Overall, it is clear that EU lawmakers have fallen short of providing all of the necessary safeguards that many had hoped would be defined in the Act.

Next Steps for the AI Act

The text of the AI Act has now been subject to final review, and this latest version will be deemed approved unless a political group or requisite number of MEPs oppose it within 24 hours of its announcement to the Parliament sitting in plenary. Once the Council endorses the text — as it is expected to this month — the AI Act will be published in the Official Journal of the European Union (likely in June) and enter into force on the ‘twentieth day following its publication.’ Key provisions of the AI Act will become applicable in a staggered manner:

  1. 6 months from the date of entry into force, prohibitions on AI practices posing unacceptable risk will apply.
  2. 12 months after entry into force, obligations related to GPAI models become applicable.
  3. 24 months after entry into force, obligations for high-risk systems listed in Annex III, penalties, and regulatory sandboxes by member states will become effective.
  4. 36 months after entry into force, obligations for high-risk systems not listed in Annex III will apply.
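To make the staggered timeline concrete, the minimal sketch below derives each application date from a hypothetical publication date in the Official Journal; the publication date is a placeholder assumption, not an official date.

```python
# Minimal sketch of the staggered application timeline; the publication date
# below is a placeholder, not an official date.
import calendar
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping the day if needed."""
    idx = d.month - 1 + months
    year, month = d.year + idx // 12, idx % 12 + 1
    return date(year, month, min(d.day, calendar.monthrange(year, month)[1]))

publication = date(2024, 6, 12)                      # hypothetical OJ publication
entry_into_force = publication + timedelta(days=20)  # twentieth day after publication

milestones = {
    6: "prohibitions on unacceptable-risk AI practices",
    12: "obligations for GPAI models",
    24: "obligations for Annex III high-risk systems, penalties, sandboxes",
    36: "obligations for high-risk systems not listed in Annex III",
}

for months, label in milestones.items():
    print(f"{add_months(entry_into_force, months)}: {label}")
```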

In the Meantime…

The European Commission is set to launch the AI Pact, a voluntary initiative intended to foster industry implementation of the AI Act. It will include knowledge-sharing among private sector companies, and voluntary pledges to work towards compliance, which the Commission will then collect and publish. Meanwhile, there are reports that the Commission is urging member states to appoint their AI regulators before the end of the year. 

Recruitment for the AI Office is currently underway, but its leader has not yet been announced; some Members of the European Parliament (MEPs) have expressed concerns about a lack of transparency in the hiring process.

In Other ‘AI & EU’ News 

  • In late March, the UN General Assembly adopted a resolution on AI, calling on states and relevant stakeholders to refrain from using AI systems presenting undue risks to human rights, and on the private sector to observe applicable legal frameworks and the existing UN Guiding Principles on Business and Human Rights. 
  • The Council of Europe (CoE) finalised a framework convention on Artificial Intelligence on 14 March. A month later, the CoE’s Parliamentary Assembly suggested improvements to the draft text. 
  • The UK-based Trades Union Congress (TUC) published a legislative proposal to regulate the use of AI in the workplace.

Content of the Month 📚📺🎧

CDT Europe presents our freshly curated recommended reads for the month. For more on AI, take a look at CDT’s work.  
