February 16, 2024

Decoding the EU AI Act: Regulatory Scope, Purpose and Impact

The EU AI Act is the most ambitious and comprehensive AI-specific legal framework developed to date. While this doesn’t necessarily mean that the act’s provisions will prove effective, it does signal an aggressive AI regulation strategy on the part of the EU, and in particular, one motivated by the intention to standardize AI regulation on a global scale. The next one to two years, during which the requirements of the act will progressively take effect, will be critical to observe, especially as AI innovation and proliferation continue to accelerate.

The short-term successes and failures of the EU AI Act will inform not only subsequent iterations and revisions of the act itself, but also AI regulatory measures, strategies, and initiatives taken outside the EU. In this respect, the EU is taking a top-down, or horizontal, approach to AI regulation, having developed a legal framework that is all-encompassing by virtue of its intentionally broad scope. In the long run, it will be interesting to see whether this tactic proves more effective than bottom-up, or vertical, approaches, in which AI regulations are initially designed narrowly (to address specific kinds of technologies, use cases, risks, or benefits) and then integrated into larger-scale comprehensive regulatory frameworks.

The regulatory scope of the EU AI Act is complex and multifaceted. Therefore, to make these complexities more easily digestible, we have organized the insights presented in this post into three main areas: 1) high-level regulatory objectives, 2) key actors, and 3) technologies targeted.

As AI continues to evolve and scale, the very foundations of society and democracy will shift, necessitating continued engagement from all members of society, not just the experts, scientists, and regulators. The more we understand about AI and regulation, the more empowered we will be to influence it for the betterment of humanity, ensuring the development and deployment of safe, responsible, and trustworthy AI—this is our mission at Lumenova AI.

High-Level Regulatory Objectives

To fully comprehend the regulatory scope of the EU AI Act, it’s important to first understand the high-level regulatory objectives the act proposes, especially since many of these objectives will likely remain constant even as the act undergoes further revisions in response to AI developments. That said, readers should note that robust AI regulation is especially difficult to design, given the exponential pace of AI innovation and proliferation, so some of the high-level objectives we outline below may change over time to account for novel AI developments.

The high-level objectives of the EU AI Act are the following:

  • Standardize AI legislation across the Union to maintain and promote EU leadership in trustworthy AI innovation and the equitable distribution of AI benefits, and to protect EU citizens from potential AI harms or threats to their wellbeing.
  • Focus on technology deployment rather than technology development to encourage AI innovation (especially among SMEs and start-ups), support an internal EU market that values the free movement of AI goods and services, preserve the continued use of AI systems for scientific R&D, and establish regulatory sandboxes where AI developers can safely experiment with novel innovations.
  • Preserve and protect EU democratic values by ensuring that the use of AI systems doesn’t undermine fundamental rights or human autonomy; result in discrimination or the exploitation of vulnerable communities; enable unchecked surveillance, profiling, emotion recognition, or social scoring; or facilitate mass manipulation and coercion.
  • Preserve and protect national security and critical infrastructure by ensuring that high-risk AI systems are verified for reliability prior to deployment, that national supervisory authorities are established, and that collaborative foreign partners have demonstrated adequate safeguards for preserving fundamental freedoms and human rights.
  • Promote the development of human-centric and trustworthy AI by ensuring that AI is designed as a tool whose intended purpose is to increase human wellbeing, with a focus on ethical AI guidelines, systemic risk assessment and mitigation for general-purpose AI (GPAI) systems, and the facilitation of stakeholder awareness, engagement, and public consultation.
  • Emphasize the criticality of transparency and accountability to counteract digital asymmetry and dark patterns (interfaces designed to trick users into making certain decisions), ensure that consumers can understand and exercise their rights in relation to AI-generated impacts, and guarantee that AI systems are transparent and interpretable in terms of their design, function, intended purpose, and use, especially in law enforcement.
  • Maximize AI benefits and minimize AI risks such that substantial positive AI impacts—especially across high-impact domains—can be achieved safely, securely, and responsibly.
  • Implement robust consumer data protection and governance measures in accordance with existing EU laws such as the General Data Protection Regulation (GDPR), with a special emphasis on biometric data protection and privacy.
  • Promote AI literacy to encourage trustworthy and responsible AI (RAI) innovation among providers and deployers, the preservation of fundamental rights in relation to AI systems, and public awareness of the potential risks and benefits of AI systems.

Key Actors

When we refer to “key actors” in the context of AI regulation, we mean two different sets of actors: 1) those that are protected, and 2) those that are held accountable. In certain contexts, however, these two sets may overlap. For example, a deployer of a high-risk AI system may be held accountable for unintended harms produced by their system, yet avoid penalties insofar as they have provided adequate technical documentation and demonstrated risk mitigation and prevention measures. These kinds of cases tend to be complex and context-specific, so they are beyond the scope of this post; still, readers should keep them in mind as we move forward.

The EU AI Act seeks to hold deployers and providers of AI systems accountable, while protecting EU citizens from the potentially harmful effects such systems might produce on their fundamental rights, health, and safety.

Deployers are defined as “any natural or legal person, including a public authority, agency or other body, using an AI system under its authority, except where the AI system is used in the course of a personal non professional activity.” In simple terms, deployers can be essentially any actor, from individuals and businesses to government entities, that deploys an AI system in a professional setting or publicly accessible place. Providers encompass the same array of actors but differ in that they are not actively deploying AI systems; rather, they develop them with the intention of deploying them within the EU.

Moreover, non-EU deployers and providers are also held accountable under the act’s requirements insofar as the output of their systems is intended for use within the EU. In certain cases, however, public authorities of cooperative foreign partners may be exempt.
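To make these scope rules more concrete, below is a minimal sketch, in Python, of how the act’s applicability might be modeled as a simple rule check. It is purely illustrative: the Actor fields, the in_scope function, and the exemption flag are our own simplifications of the summary above, not the act’s legal tests.

    from dataclasses import dataclass

    @dataclass
    class Actor:
        role: str                     # "provider" or "deployer"
        based_in_eu: bool             # whether the actor is established in the EU
        professional_use: bool        # personal, non-professional use is out of scope
        output_used_in_eu: bool       # whether system output is intended for use in the EU
        exempt_foreign_authority: bool = False  # e.g., a public authority of a cooperative foreign partner

    def in_scope(actor: Actor) -> bool:
        """Toy approximation of the act's applicability, per the summary above."""
        if actor.role not in ("provider", "deployer"):
            return False
        if not actor.professional_use:
            return False  # personal, non-professional activity is excluded
        if actor.exempt_foreign_authority:
            return False  # certain foreign public authorities may be exempt
        # EU-based actors are covered; non-EU actors are covered when their
        # systems' output is intended to be used within the EU.
        return actor.based_in_eu or actor.output_used_in_eu

    # A non-EU provider whose system output is intended for EU use falls in scope.
    print(in_scope(Actor("provider", based_in_eu=False,
                         professional_use=True, output_used_in_eu=True)))  # True

In practice, of course, applicability turns on legal interpretation rather than boolean flags; the sketch simply shows how the post’s summary of the scope rules fits together.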

Technologies Targeted

Arguably the most innovative characteristic of the EU AI Act is its tiered risk classification structure. Since two significant high-level objectives of the act are to continue fostering AI innovation and realizing AI benefits, the tiered risk classification structure serves to ensure that these objectives can be fulfilled without compromising human safety and wellbeing, fundamental rights, democracy, and Union values.

As its name suggests, the EU AI Act targets AI systems, defined by their ability to infer ways to achieve human-set objectives using methods such as machine learning and logic- or knowledge-based approaches; this definition includes GPAI. Such systems are classified by the degree of risk they pose, according to either the inherent risk profile of the system in question as a standalone product or component of a product, or the function the system serves within a particular high-risk or high-impact domain, such as healthcare or education. We therefore subdivide this section into three parts: 1) AI systems classified as high-risk, 2) prohibited AI systems, and 3) low-risk AI systems.

High-risk AI systems are classified as:

  • Systems that can generate adverse impacts on the fundamental rights—as expressed in the EU Charter—of EU citizens.
  • Systems that pose a likely threat or risk of harm to human health and safety.
  • Systems that leverage highly sensitive personal data characteristics, especially biometric data.
  • Systems leveraged as safety components in the operation of critical digital and physical infrastructures.
  • Systems leveraged to drive or execute consequential decisions in educational and vocational training contexts such as admissions and educational access, learning evaluation and placement, and monitoring of prohibited behaviors (e.g., cheating).
  • Systems leveraged to drive or execute consequential decisions in employment contexts such as recruiting, promotions, contractual agreements, task allocation, and workplace monitoring.
  • Systems that can influence individuals’ access to essential goods and services such as healthcare, social security, or housing benefits.
  • Systems leveraged for the purposes of evaluating citizens’ credit scores or creditworthiness.
  • Systems leveraged for emergency services, such as first responder dispatch or the classification of emergency calls.

Before outlining the kinds of AI systems that are prohibited under the EU AI Act, it’s important to understand that in many of the cases illustrated below, exemptions can apply where the risk of harm is outweighed by potential benefits, for example, if mass surveillance systems are used to locate missing persons or crime victims. Nonetheless, prohibited AI systems are broadly considered to be:

  • Systems leveraged for the purposes of manipulating, exploiting, or socially controlling the behavior of natural persons in the EU by means of deception, behavioral nudging, and vulnerability exploitation.
  • Systems leveraged for biometric categorization, in which biometric data such as fingerprints are used to infer sensitive personal characteristics, including preferences, political or religious beliefs and ideologies, race, sex, and sexual orientation, to name a few.
  • Systems leveraged for the purposes of mass surveillance, for example, by scraping facial recognition data from the internet with the intent to expand facial recognition databases.
  • Systems leveraged for remote real-time biometric identification in publicly accessible places, especially by law enforcement entities; such systems might not necessarily meet the criteria of mass surveillance, but they nonetheless cultivate feelings of constant surveillance that undermine fundamental rights such as freedom of assembly.
  • Systems leveraged for the purposes of social scoring, insofar as they subject individuals or groups to differential treatment driven by profiling, for example, by determining access to essential goods and services by reference to social creditworthiness.
  • Systems leveraged for the purposes of risk assessment of natural persons in the EU, for example, a system that predicts the likelihood of recidivism among criminal offenders.
  • Emotion recognition systems leveraged in workplace and educational settings to infer sensitive data characteristics such as an individual’s emotional state.

Low-risk AI systems are classified according to four main criteria:

  • Systems whose intended purpose is to perform narrow procedural tasks, such as data classification or restructuring.
  • Systems leveraged to build upon or enhance the results of previously completed human activities, for example, using AI to make a work-related email sound “more professional.”
  • Systems whose intended purpose is to monitor and evaluate human decision-making patterns, for example, to understand whether a teacher is deviating from their usual grading pattern.
  • Systems leveraged for the purposes of preparatory assessments, for example, document translation or processing, file indexing, or the creation of data linkages, to name a few.

The tiered risk classification structure of the EU AI Act also implies that the severity of regulatory requirements will correspond to the degree of risk an AI system poses. However, in certain cases, exemptions may apply, for instance, if facial recognition technologies are leveraged in a publicly accessible place to identify and catch the perpetrator of a terrorist act. In other words, whether the use of an otherwise prohibited AI system remains off-limits may depend on the context in which it’s used and the degree to which potential benefits outweigh potential harms.
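As a rough illustration of how tiered requirements and context-dependent exemptions interact, consider the following Python sketch. The three-tier model, the requirement summaries, and the single benefits_outweigh_harms flag are hypothetical simplifications of this post’s summary, not the act’s actual legal categories or tests.

    from enum import Enum

    class RiskTier(Enum):
        PROHIBITED = "prohibited"
        HIGH = "high"
        LOW = "low"

    # Hypothetical summaries paraphrased from this post, not the act's legal text.
    REQUIREMENTS = {
        RiskTier.PROHIBITED: "banned from deployment in the EU",
        RiskTier.HIGH: "strict requirements: documentation, risk mitigation, oversight",
        RiskTier.LOW: "light-touch requirements, e.g., basic transparency",
    }

    def applicable_requirements(tier: RiskTier, benefits_outweigh_harms: bool = False) -> str:
        """Map a risk tier to a requirement summary, letting a context-specific
        exemption relax an otherwise prohibited use (e.g., facial recognition
        in a public place to catch the perpetrator of a terrorist act)."""
        if tier is RiskTier.PROHIBITED and benefits_outweigh_harms:
            # Under this toy model, an exempted use is treated as high-risk
            # rather than banned outright.
            tier = RiskTier.HIGH
        return REQUIREMENTS[tier]

    # Example: an exempted law-enforcement use is not banned, but remains high-risk.
    print(applicable_requirements(RiskTier.PROHIBITED, benefits_outweigh_harms=True))

The design choice worth noting here is that exemptions do not remove obligations; under this sketch’s assumptions, an exempted use simply drops into a lower, but still regulated, tier.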

Key Takeaways

The EU AI Act is a monumental piece of AI legislation that will become increasingly complex as novel AI developments inspire significant changes and revisions to the act—special attention should be paid to near-term regulatory developments regarding GPAI systems. Still, despite the flexible and adaptable nature of the EU AI Act, those who comprehend its core elements and principles will be at an advantage, possessing a heightened capacity to deal with the next wave of compliance requirements.

At Lumenova AI, we are passionate about promoting and streamlining the RAI and governance process, and we believe that a critical component of this process is easy access to clear and insightful information on the progression of the AI policy landscape. Consequently, this is the first of several pieces in our EU AI Act series, so stay tuned for more! And for those interested in looking beyond the EU AI Act, follow our blog for the most recent developments in AI regulation, industry trends, and AI risk management.

Make your AI ethical, transparent, and compliant - with Lumenova AI

Book your demo