April 4, 2024

Decoding the EU AI Act: Laying the Groundwork for Standardized AI Regulation

In our first post of this series, we provided a high-level overview of the EU AI Act’s regulatory scope and identified several core regulatory objectives, the first of which concerned the standardization of AI legislation across the EU. Now that the EU AI Act has officially been passed, all EU member states are subject to essentially the same set of requirements—a uniform regulatory standard governing AI development and deployment procedures at the EU-wide scale now exists.

Consequently, the question this post seeks to address doesn’t concern the EU, but rather, the rest of the world: what characteristics of the EU AI Act are likely to become universal AI regulatory standards, or in other words, which EU AI Act mechanisms, principles, and objectives might emerge in other AI regulations around the world?

To answer this question, we’ll begin by briefly outlining the motivation for asking it, since it may not be immediately obvious to some. Following this, we’ll dive into the characteristics of the EU AI Act we think will be most influential globally. However, because making future-oriented predictions about AI policy is tricky, especially given the pace of AI advancement, our predictions will be grounded in what we consider the EU AI Act’s major strengths. By strengths, we don’t necessarily mean the “best” aspects of the Act, but rather those likely to remain consistent and foundational regardless of future AI developments.

The EU AI Act is poised to significantly reshape the AI policy landscape. Envisioning the potential nature of these impacts, particularly in a business or institutional setting, will be instrumental to future-proofing AI compliance strategies and understanding how fundamental human rights evolve as AI proliferates. For readers craving more information on the EU AI Act, or other AI policy and risk management insights, we invite you to follow Lumenova AI’s blog, where you can also explore content on generative and responsible AI.

The EU AI Act: Inspiring Global AI Regulatory Standards

The EU is the first, and so far the only, international body to establish comprehensive AI legislation; several other countries around the world, including the US, Canada, India, and various nations in Africa and South America, such as Egypt and Brazil, are still in the process of building and defining their own national AI strategies and governance protocols. The expectation that the EU AI Act will inspire global trickle-down effects is rooted partly in the EU’s AI policy lead and partly in fairly recent precedent: the General Data Protection Regulation (GDPR), which came into effect in 2018.

Data protection laws have been around since the 1970s, and leading up to the early 2000s, these laws primarily sought to protect the privacy and integrity of potentially sensitive data. During this timeframe, most people still equated the concept of data with information, yet over the last 15 years or so, our understanding of data has shifted from a discrete informational resource to a highly valued commodity that fuels some of the most profitable companies in the world. In this respect, the EU’s GDPR was revolutionary for two reasons: 1) it was far more comprehensive than any of its predecessors, and 2) it recognized the real-world implications of the role that data plays today, both as an informational resource and as a commodity.

In the years following GDPR implementation, several foundationally similar regulations began emerging around the globe, from the United States’ California Consumer Privacy Act (CCPA) and Virginia Consumer Data Protection Act (VCDPA) to Brazil’s General Data Protection Law (LGPD), Egypt’s Data Protection Law (Law No. 151), India’s Digital Personal Data Protection Act (DPDPA), and Nigeria’s Data Protection Regulation (NDPR), to name a few. The far-ranging policy and regulatory implications of the GDPR are evident, and the EU AI Act is likely to produce similarly profound consequences.

Like the GDPR, the EU AI Act differs from other early attempts at AI regulation in two major ways (there are many other significant differences, but broadly speaking, these two are the most salient): 1) it’s much more comprehensive than any other AI legislation, and 2) it proposes a tiered risk classification structure whereby regulatory requirements proportionately reflect the risk profile of an AI system. As we’ll see in our upcoming discussion, the risk-centric approach of the EU AI Act is what inspires several of its most influential components. However, this isn’t to say that other, indirectly related aspects of the Act, such as the promotion of AI literacy, will be any less influential on the global AI policy landscape.

The EU AI Act: Setting Global AI Regulatory Standards

The EU AI Act doesn’t put a stranglehold on European AI providers and developers. On the contrary, it aims to foster an inclusive, diverse, and most importantly, trustworthy AI innovation ecosystem where scalable AI benefits can be fully realized while fundamental rights, democratic values, critical infrastructure, and human health and safety remain protected and preserved. However, promoting and upholding trustworthy AI necessitates versatility and consistency, given the expansive array of evolving AI technologies and their possible use cases, especially across high-impact domains such as healthcare, education, and law enforcement.

As AI innovation and proliferation continue, the risk profiles of AI systems will undergo considerable changes, fundamentally altering the kinds of impacts these systems generate. Nonetheless, we’ll still need a way to classify these systems in terms of their risks and impacts; otherwise, holding AI providers and deployers accountable to regulatory requirements, in the face of exponential innovation and the emergence of novel risks, would prove extremely difficult if not impossible. This is precisely why the EU AI Act’s tiered risk classification approach is poised to become a core structural component of AI legislation around the world: it enables a versatile yet consistent approach to AI risk categorization and prioritization.

To be clear, we’re not saying that other international governance bodies will directly copy the EU AI Act’s tiered risk classification approach—in fact, this is highly improbable due to culturally relative values, governance and business structures, and risk prioritization approaches, among several other complex factors. However, we can expect certain high-level mechanisms and principles (listed below), specific to the EU AI Act’s risk-centric approach, to permeate future AI regulations.

Mechanisms

AI risk profiles: AI systems are classified in terms of the risks they pose, determined primarily by reference to a system’s inherent risk profile (i.e., design, function, and intended purpose) and the context in which it’s used (i.e., potential use cases). These risk profiles fall into three broad categories: 1) prohibited (unacceptable-risk) systems, 2) high-risk systems, and 3) lower-risk systems subject to lighter obligations. This simple structure enables regulators to account for the full range of potential and novel AI risks, ensure that regulatory requirements are applied fairly across AI providers and deployers, maintain an inventory of high-risk and prohibited AI systems, and develop a shared language through which to communicate and understand AI risks as they emerge. Since this structure can be easily customized to suit particular governance and social agendas and corresponds with current AI risk management best practices, there’s good reason to believe it’ll emerge in future AI legislation.
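
To make this mechanism concrete, here’s a minimal, purely illustrative Python sketch of how a compliance team might encode a tiered risk classification internally. The tier names, use-case sets, and classify function are our own simplified assumptions, not definitions taken from the Act:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    HIGH = "high risk"
    LOWER = "limited or minimal risk"

# Hypothetical, non-exhaustive use-case sets; illustrative only.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"credit scoring", "medical diagnosis", "hiring"}

def classify(use_case: str, is_safety_component: bool = False) -> RiskTier:
    """Map a system's deployment context and inherent profile to a tier."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES or is_safety_component:
        return RiskTier.HIGH
    return RiskTier.LOWER

print(classify("customer support chatbot"))  # RiskTier.LOWER
print(classify("credit scoring"))            # RiskTier.HIGH
```

The appeal of this kind of structure is that the use-case inventories can grow as novel risks emerge, while the classification logic, and the obligations attached to each tier, stay stable.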

Risk categories: AI risks are assessed across the following categories: societal, systemic, environmental, critical infrastructure, democratic, and national. These categories span most if not all areas in which AI systems could generate substantial impacts in a modern democratic nation. In other words, each category offers a lens through which to hone in on the scope of potential AI risks and impacts, and their breadth makes them non-discriminatory: most democratic nations already care about maintaining societal stability, reducing systemic injustice, promoting environmental sustainability, preserving democratic function, and protecting national security and critical infrastructure, so these categories can be easily mapped onto disparate governance agendas. Moreover, several other prominent AI risk management frameworks and legislative efforts, from the NIST AI RMF and ISO 42001 to the White House Blueprint for an AI Bill of Rights, California Executive Order N-12-23 (EO N-12-23), the UK AI Regulation Framework, and Brazil’s Bill No. 21/2020, also demonstrate a strong interest in addressing AI risks across these categories, indicating that they’re likely to become standard.

Regulatory sandboxes: Secure testing and validation hubs that allow AI providers and developers to evaluate their models for safety, trustworthiness, and compliance, and to manage and address potential risks linked to real-world deployment. Under the EU AI Act, regulatory sandboxes have two main objectives: 1) ensure that high-risk AI systems are deployed safely, and 2) encourage continual responsible AI innovation, especially among start-ups and SMEs. As evidenced by international events like the UK’s AI Safety Summit and the G7 Hiroshima Summit, the need to quickly address AI risks without stifling potential AI benefits is globally recognized. Moreover, regulatory sandboxes (also underscored by EO N-12-23 and Singapore’s IMDA) provide a reliable mechanism through which to promote trustworthy AI innovation, conduct secure risk and impact assessments, and inform evidence-based regulatory revision and adaptation, making them highly appealing governance tools for managing AI risks without compromising potential AI benefits.

The EU AI Office and Board: Under the EU AI Act, the AI Office and Board will be central to helping the EU Commission oversee, enforce, and adapt the Act’s provisions. While some nations may be hesitant to establish centralized AI-specific governance bodies due to concerns about over-regulation or excessive government control, the need for organized and actionable AI expertise is likely to outweigh such concerns: regulators and policymakers are not AI experts, and to design AI laws that are fair, effective, and practical, they’ll need to be well informed, especially considering how fast AI moves. International actors may not choose as high a degree of centralization as the EU has; however, we’re likely to see many more national AI organizations emerge, early examples of which include the national AI safety institutes of the US, UK, and Singapore.

Principles

Prioritization of AI deployment risks over AI development risks. The most significant AI risks, those that can produce widespread harmful impacts, notably for increasingly capable general-purpose AI (GPAI) systems, stem from deployment. AI is also progressing so fast that specific, targeted attempts to regulate its development (except in cases where pre-identified risks, such as threats to data privacy and cybersecurity, are consistently relevant) are not only misplaced but virtually guaranteed to fail at some point. Moreover, there’s a strong global incentive, motivated by potential AI benefits and competitive international pressures, to support and foster AI innovation. Seeing as most international actors share this pragmatic perspective, it’s reasonable to assume that their future AI legislation strategies will prioritize technology deployment over development.

Accommodating change without losing efficacy or relevance. The EU AI Act’s risk approach leverages a uniform structure whose fine details can be updated, revised, and/or changed in light of novel AI and regulatory advancements. Whether this approach possesses enough built-in flexibility and adaptability remains to be seen; however, the base mechanisms underlying its structure, namely AI risk profiles, risk categories, and regulatory sandboxes, will continue to be relevant in the future irrespective of AI developments. In simple terms, the core mechanisms of the EU AI Act’s risk approach can maintain robustness and resilience in light of novel AI advancements. Consequently, many international actors are likely to view these mechanisms as foundational to AI governance protocols, especially since they can be adapted to suit the interests of specific societies and their governments.

Distinguishing between GPAI, generative AI, and other forms of AI. AI is often used as a blanket term to describe an extremely wide variety of computational technologies, from robotics and facial recognition systems to content curation algorithms and large language models (LLMs). AI is arguably more versatile than any other technology humans have created: it can serve a narrow purpose, like classifying specific data, or a general purpose, like helping a manager streamline several tasks in their workflow. Given how varied the intended purposes of AI systems can be, it’s critical that regulators can identify disparate AI systems by reference to what they can do; otherwise, they risk designing AI legislation that’s insufficiently targeted, and therefore ineffective. Moreover, the rising global popularity of generative AI and GPAI systems like OpenAI’s ChatGPT and Google’s Gemini indicates that certain kinds of AI warrant more attention due to the potential scale of their impacts and use. Fortunately, several countries, including the US, Canada, UK, Singapore, China, Brazil, and Japan, are already incorporating this principle into their national AI strategies, hinting that it’s becoming a global standard.

Preserving transparency, accountability, and consumer rights. Transparency, accountability, and consumer rights are crucial democratic principles, even in the absence of AI. We can therefore expect them to appear in the vast majority of AI legislation designed within a democratic environment, particularly in light of the role that AI can play in consequential decision-making contexts, such as those involving access to essential goods and services. To this point, the provision of AI-specific consumer rights and protections is paramount; examples include the right to know you’re interacting with an AI system or AI-generated content, and the right to request human decision-making in lieu of AI decision-making. Such protections are also beginning to emerge in other AI legislation, including the California Automated Decision-Making Technology Regulations (CPPA ADMTR), Brazil’s Bill No. 2338, and the UK AI Regulation Framework. The close alignment of these principles with democratic values makes them particularly strong candidates for core principles in future AI legislation designed in democratically minded countries.

Holding governments accountable for their use of AI. The possibility that governments will leverage AI for social control, surveillance, or mass manipulation, even in democratic nations, is very real, especially considering their influence as market actors. The EU AI Act addresses this possibility in several ways, most notably through the European Data Protection Supervisor, the Advisory Forum, and the Scientific Panel of Independent Experts, and other democratic nations around the world are likely to follow suit, since failing to do so could invite widespread civic unrest. However, it will be interesting to see which specific mechanisms are chosen to enforce this principle at the international scale, from defining AI deployers to include government bodies to establishing independent oversight and review boards dedicated to auditing the government’s AI use.

Promoting international cooperation. The EU is, by definition, an international body, so it doesn’t come as a surprise that the EU AI Act suggests several means by which to promote cooperation with foreign partners, even those outside the EU. However, international cooperation is poised to become a core principle of AI legislation around the world for a simple reason: the process of understanding AI risks and impacts can be dramatically expedited if nations are willing to learn from their foreign partners’ experiences with AI technologies. Additionally, democracies tend to be much more cooperative than their autocratic counterparts, so they are predisposed to adopt this principle in their national AI governance strategies.

Additional Standards

Seeing as the EU AI Act is a risk-centric piece of legislation, the above discussion covers most of the mechanisms and principles we expect will evolve into, or at least influence, concrete standards throughout the global AI regulation landscape. However, before we conclude, there are a few additional characteristics of the EU AI Act worthy of discussion:

  • Horizontal regulatory structure: The top-down approach of the EU AI Act, which seeks to regulate AI across borders, domains, and industries, will appeal to nations with governance structures tending toward higher degrees of centralization, or alternatively, smaller nations that are minimally involved in the global AI ecosystem. For large and diverse nations where highly centralized governance is more difficult to enact and maintain, we’re more likely to see vertical approaches to AI legislation, where regulations targeting specific domains and use cases emerge from the ground up to eventually substantiate a national AI strategy; this approach more closely parallels what’s currently happening in the US.
  • Promotion of AI literacy: Corporations and governments around the world are beginning to realize the importance of AI awareness and education, not just from a regulatory perspective, but also from a civic one. The mass commercialization and distribution of powerful state-of-the-art AI systems has enabled virtually anyone with an internet connection to access these tools and do whatever they want with them, leading to problems like plagiarism, IP rights infringement, deepfakes, and the spread of hateful rhetoric and misinformation. Ensuring a safe and beneficial AI future depends not only on well-crafted regulations but also on the ability to educate civil society at large. An AI-literate population knows how to use AI responsibly, is better equipped to identify and address possible AI-related risks and harms, and can play an active role in informing the future state of AI regulation; to any country designing AI legislation, the value of AI literacy is obvious.
  • Compliant AI systems that still pose a risk: Just because an AI system is compliant doesn’t mean it isn’t risky; this was a great catch on the part of EU regulators, especially since many institutions have a “check-the-box” mindset when it comes to compliance. For most other technologies, compliance tends to also cover safety requirements, but with AI, a separation between compliance and safety can be necessary when systems evolve and proliferate at a rate that surpasses regulators’ ability to craft new legislation or make relevant changes. In the context of the EU AI Act, this mechanism represents a regulatory failsafe that prevents AI actors from exploiting a potential loophole: AI providers and deployers can’t place their AI systems on the market or operate them unless they’re deemed both safe and compliant. While we think this regulatory mechanism will eventually become standard, in the short term, we’re more likely to see it emerge in AI legislation coming from nations with a more mature understanding of AI innovation.
  • Proportional fines and penalties: Large enterprises can afford to pay much higher fines and penalties than startups or SMEs; fining large enterprises, startups, and SMEs indiscriminately could therefore concentrate power at the top, among big players that can afford to absorb financial losses. Not only would this discourage AI innovation, it would also reduce diversity within the AI business landscape, making it less competitive and less rich in ideas. Regulatory fines and penalties provide a potent compliance incentive for AI providers and deployers, but they should be proportionately balanced to reflect the resources an organization actually has (a simple sketch of this arithmetic follows this list). For any nation wishing to promote and uphold a trustworthy AI innovation ecosystem, adopting this approach, or something resembling it, will be necessary: AI organizations need to be held accountable, but they should also feel comfortable taking calculated risks in the interest of furthering innovation and enacting AI benefits.
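
To illustrate the arithmetic behind proportionality, here’s a minimal Python sketch. It assumes the penalty ceilings widely reported for the Act’s prohibited-practice violations (up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher, with SMEs and start-ups subject to whichever is lower); treat the figures and the max_fine helper as illustrative assumptions rather than legal guidance:

```python
def max_fine(annual_turnover_eur: float, is_sme: bool,
             fixed_cap_eur: float = 35_000_000, pct_cap: float = 0.07) -> float:
    """Illustrative penalty ceiling: large firms face the higher of the two
    caps, while SMEs and start-ups face the lower, keeping fines proportional."""
    fixed, pct = fixed_cap_eur, pct_cap * annual_turnover_eur
    return min(fixed, pct) if is_sme else max(fixed, pct)

# Large enterprise, EUR 2B turnover: 7% (EUR 140M) exceeds the EUR 35M cap.
print(f"{max_fine(2_000_000_000, is_sme=False):,.0f}")  # 140,000,000
# Start-up, EUR 5M turnover: 7% (EUR 350K) is the lower, binding ceiling.
print(f"{max_fine(5_000_000, is_sme=True):,.0f}")       # 350,000
```

The design choice worth noting is the asymmetry: the same two caps apply to everyone, but flipping between the higher and lower of the two keeps the deterrent meaningful for large enterprises without being existential for smaller players.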

Conclusion

It’s difficult to envision to what extent future AI innovations will necessitate changes to the EU AI Act. Updates and revisions will surely be required at some point, but seeing as the Act will take effect sometime in May or June of this year, that is, 20 days after it’s published in the Official Journal, near-term changes are unlikely; the Act will become fully applicable within two years of taking effect.

That being said, some requirements outlined in the EU AI Act, such as those for prohibited AI systems and GPAI systems, will apply within 6 and 12 months of the Act taking effect, respectively. If any significant changes occur, we expect they’ll be consequences of how the EU AI Act is applied across these two areas, though this isn’t to say that a major AI breakthrough made this year couldn’t influence other parts of the Act as well.

Even if certain parts of the EU AI Act change over the coming year, its influence on other emerging AI legislation around the world will persist. Most international actors are eager to reap the potential widespread benefits that AI can inspire, but they’re also deeply concerned about the complex and evolving array of risks that may emerge. For nations that lack the resources and technical acumen required to develop comprehensive AI legislation from the ground up, pulling from the EU AI Act or leveraging it as a model for their own AI legislation could prove extremely valuable.

Still, over the coming year, we’re likely to witness some AI legislation that differs profoundly from the EU AI Act, most notably in nations with non-democratic governance and value structures. However, for modern democracies, the EU AI Act is poised to set global standards in AI legislation, if not through concrete mechanisms, then through overarching governance and trustworthy AI principles.

Keeping up with the latest EU AI Act developments will be critical for any AI providers and deployers who want to do business in the EU. In this respect, we suggest that readers follow Lumenova AI’s blog to maintain an up-to-date understanding of the most recent EU AI Act changes, trends, and insights. For readers who are eager to embark on their AI governance and risk management journey, we invite you to try out Lumenova AI’s responsible AI platform and book a product demo today.

