April 19, 2024

Decoding the EU AI Act: Influence on Market Dynamics


Market dynamics are influenced by a myriad of multifaceted and frequently interconnected factors: geopolitical and environmental events, fluctuations in supply and demand, market sentiment, GDP growth and employment rates, socio-political trends, and technological advancements, to name a few. If left unchecked, these factors can destabilize markets, hence the importance of regulation. Regulations can't address everything simultaneously, but they can help us deal with many of the consequences such factors produce, which is vital for high-impact exponential technologies like AI.

However, applying regulations also introduces another layer of uncertainty. Overregulation can stifle innovation by reducing market competition, efficiency, and growth. Conversely, underregulation can lead to market failures or instability, the erosion of public trust, and violations of consumers’ fundamental rights, such as privacy or autonomy. The EU AI Act is the most significant and comprehensive piece of AI legislation developed to date, but it remains to be seen whether EU lawmakers struck the right regulatory balance—a point we’ll return to in the discussion section of this post.

Regardless of whether the AI Act is balanced enough, its provisions will generate widespread effects throughout the EU AI market. Still, before we go down this rabbit hole, we strongly encourage readers to review Lumenova AI’s EU AI Act series, since this piece is largely motivated by and based on our previous explorations and breakdowns of this topic. For readers interested in further broadening their understanding of AI regulation, or exploring related fields like AI risk management, responsible AI (RAI), and generative AI, we invite you to follow Lumenova AI’s blog.

Promoting EU-Based AI Innovation

It's not uncommon for innovators to have an anti-regulation mindset, born out of the fear that governance and bureaucracy stifle innovation. While the AI Act draws some clear lines in the sand, it recognizes the importance of fostering AI innovation, both as a mechanism for mitigating future AI-related risks and impacts and for realizing AI benefits at scale. This is a critical point to consider, especially as it concerns the aforementioned mindset: many might assume that the AI Act's comprehensive and ambitious scope would push EU market actors to outsource AI product development. Though this may be true for some, the majority will likely remain in the EU for a few reasons.

First, the AI Act strongly favors regulating AI deployment over development. As an international body composed of several wealthy and diverse nations, the EU possesses a rich array of intellectual and financial capital, which, if funneled into the AI ecosystem, could help establish the EU as a global leader in trustworthy AI innovation. To this point, the EU is also experiencing potent competitive pressures from the US and China, which currently lead the AI race. If the EU fails to support vibrant internal AI markets, it risks losing a significant portion of its bargaining power on the global stage, reducing its ability to ensure that the future of AI also aligns with core EU values and principles. Simply put, from the EU's perspective, promoting EU-based AI innovation is more than necessary: it's crucial.

Second, the AI Act also targets non-EU AI market actors. Foreign AI providers who offer their products in the EU, as well as providers and deployers whose AI systems’ outputs are used in the EU, are subject to many of the Act’s requirements. While this provision might deter foreign AI actors from entering EU markets, it could produce the opposite effect on EU AI providers and deployers—even if EU providers and deployers chose to outsource product development, the majority of AI products and services they provide would still be intended for an EU audience, which defeats the purpose of outsourcing. In other words, for EU providers and deployers, it’s unclear whether the costs of shifting AI development to a foreign entity outweigh the benefits of dealing with a little less regulation.

Finally, through concrete mechanisms such as regulatory sandboxes and AI literacy initiatives, the AI Act actively encourages trustworthy AI innovation. Regulatory sandboxes allow AI market actors, especially start-ups and SMEs, to securely test and evaluate their products for safety and trustworthiness before deployment. Similarly, AI literacy initiatives, in conjunction with the promotion of trustworthy/RAI best practices and AI risk awareness, equip individuals with the skills required to identify novel AI use cases and potential benefits across high-impact domains. Together, these mechanisms serve to promote and sustain EU-based trustworthy AI innovation.

Now that we've outlined why the AI Act is likely to promote EU-based AI innovation, we can consider the more specific question of how it might influence EU market dynamics. In this respect, the following sections examine which kinds of AI actors and industries will be most affected by the AI Act, concluding with some predictions of market trends that could arise as a result.

What Kinds of AI Actors and Industries Will Be Most Affected?

As illustrated in our first post of this series, the EU AI Act targets AI providers and deployers. However, due to its tiered risk classification approach, not all AI providers and deployers are held to the same regulatory standard, since such standards are determined by whether an AI system is classified as low-risk/limited risk, high-risk, or prohibited—the severity of regulatory requirements directly corresponds with how risky an AI system is.

Low-risk AI systems are left largely unregulated by the AI Act, so providers and deployers of narrow and/or low-impact AI systems, such as those leveraged for rudimentary data classification, pattern recognition, and document processing tasks, or the enhancement of relatively inconsequential human objectives, like making a cover letter sound more professional, don’t have much to worry about—aside from existing regulations that might be relevant to them.

That being said, it's important for AI providers and deployers to carefully consider the context in which their system is used: leveraging pattern recognition to evaluate the consistency of a teacher's grading is fine, but using it to infer a student's emotional state may constitute an invasion of privacy and trust. As for providers and deployers of limited-risk systems, which are classified according to whether they pose a threat to fundamental rights, the AI Act imposes a higher transparency standard, but beyond that, compliance requirements remain fairly relaxed.

By contrast, high-risk AI providers and deployers are subject to far stricter regulatory requirements, mainly in terms of risk assessment and management, data integrity, transparency obligations, and human oversight and validation procedures. Moreover, for a system to be considered high-risk, it doesn't need to be a standalone product; it can also function as a safety component of a larger product. High-risk classifications are also significantly influenced by potential AI use cases across high-impact domains and industries. Therefore, AI providers and deployers operating throughout the following domains should expect to deal with substantial regulatory hurdles:

  • Education, healthcare, and employment
  • Law enforcement and judicial processes
  • Critical infrastructure
  • Surveillance and biometrics
  • Housing and immigration
  • Finance
  • Emergency services

The fact that an AI system is intended to be used or actively leveraged in any of the above domains doesn't immediately qualify it as high-risk, but it does make a high-risk classification much more likely. Moreover, AI providers and deployers whose systems can play a role in consequential decision-making contexts, such as determining whether to grant a loan based on creditworthiness, offering a promotion, raising a policy premium, suspending a student, monitoring the workplace, or extending a prison sentence, can and should expect a high-risk designation. If an AI system could feasibly generate adverse impacts on EU citizens' fundamental rights, health, and/or safety, regulatory alarm bells will be triggered.
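To make this tiered logic more concrete, here is a minimal, purely illustrative sketch in Python. It is not drawn from the Act's text and is not a compliance tool: the input flags, names, and thresholds are our own simplifications, and for brevity the sketch treats a high-impact domain or consequential decision-making role as sufficient for a high-risk label, whereas in practice it only makes that classification more likely.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal/low risk"
    LIMITED = "limited risk (transparency obligations)"
    HIGH = "high risk"
    PROHIBITED = "prohibited"

@dataclass
class AISystem:
    # Hypothetical, simplified attributes; the Act's actual criteria are far more granular.
    prohibited_practice: bool = False      # e.g., social scoring, untargeted mass surveillance
    high_impact_domain: bool = False       # e.g., education, employment, law enforcement
    consequential_decisions: bool = False  # e.g., loan approval, hiring, sentencing
    interacts_or_generates_content: bool = False  # e.g., chatbots, synthetic media

def classify(system: AISystem) -> RiskTier:
    """Rough approximation of the tiered classification described above; not legal guidance."""
    if system.prohibited_practice:
        return RiskTier.PROHIBITED  # banned outright, narrow exemptions aside
    if system.high_impact_domain or system.consequential_decisions:
        return RiskTier.HIGH        # strict risk-management, data, transparency, and oversight duties
    if system.interacts_or_generates_content:
        return RiskTier.LIMITED     # mainly disclosure/transparency requirements
    return RiskTier.MINIMAL         # largely untouched by the Act

# Example: a resume-screening tool used in hiring decisions.
print(classify(AISystem(high_impact_domain=True, consequential_decisions=True)))  # -> RiskTier.HIGH
```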

Systems intended for real-time biometric identification and categorization, mass surveillance, emotional recognition in the workplace or education, social scoring, social manipulation and control, and risk assessment of natural persons will, in the majority of cases, be prohibited outright; companies developing surveillance and/or psychometric profiling technologies will likely be most affected by these restrictions. As for deployers, such systems could be leveraged by a wide variety of actors; however, given their scope, their most likely use cases span certain areas of government, which include but are not limited to the following:

  • Intelligence, surveillance, and national security
  • Immigration and law enforcement
  • Social goods and services, such as state-subsidized healthcare, housing, or unemployment benefits
  • Criminal justice and judicial procedures

In nuanced cases, where the risk of using a prohibited AI system is outweighed by the benefits it might deliver (for example, leveraging mass surveillance technologies to catch the perpetrator of a terrorist act), exemptions may apply. Nonetheless, there's another kind of non-government actor that will have to tread very carefully in light of prohibited and high-risk AI classifications: very large online platforms (VLOPs), specifically social media companies.

In light of major cases like the Cambridge Analytica scandal, it's no longer a secret that social media sites can be highly effective tools for mass surveillance, profiling, and social manipulation. One Oxford study found evidence of industrialized disinformation campaigns, orchestrated via social media, in 93% of the 81 countries surveyed; content curation and behavioral profiling algorithms can be extremely effective mechanisms for acquiring and disseminating targeted information at scale.

However, such algorithms can also be genuinely useful to consumers, though they undeniably fall within the scope of the AI Act's tiered risk classification structure. It's not yet clear how social media companies and other VLOPs that leverage similar technologies, such as e-commerce sites and search engines, will be affected by the AI Act's provisions, regardless of whether those technologies are proprietary or third-party based. Even so, these kinds of organizations should begin seriously considering what compliance might look like for them.

Finally, while generative AI (GenAI) and general purpose AI (GPAI) systems are not inherently classified as high-risk, providers and deployers of these technologies can easily receive high-risk classifications if their technologies display high-impact capabilities or are determined to pose a national or systemic risk—they’re also held to higher transparency and accountability standards.

Given that potential GenAI and GPAI use cases usually cover a much broader range of applications than other kinds of AI, these systems are disproportionately more likely to be considered high-impact and, therefore, high-risk. Consequently, if any actors in the AI ecosystem can reasonably be expected to minimize EU-based business and development, or even exit the EU market entirely, it's providers and deployers of GenAI and GPAI systems.

EU AI Market Trends

So, what does all this mean for the EU AI market? At this stage, providing specific answers to this question isn't yet possible; we need to see how the application of the EU AI Act begins to unfold over the coming months before making granular predictions. However, we do have enough material to work with to envision a series of high-level EU market trends that could emerge in the near term, which we describe below:

  • Despite the popularity of GenAI and GPAI applications, EU markets could favor narrow and low-impact AI technologies, since they’re much easier to develop and deploy under the AI Act.
  • EU markets could become dominated by EU-based start-ups and SMEs, as opposed to big international players. For one, regulatory sandboxes and real-world testing provisions mainly target smaller businesses and entrepreneurs, who may otherwise lack the resources required to ensure compliance and safely test or experiment with their products before deployment. Additionally, the enforcement structure of the AI Act proportionately reflects the scale of an AI organization, meaning that the severity of regulatory penalties will vary according to business size and profitability. In other words, even if they violate the AI Act, smaller businesses typically won't face the kind of crippling penalties that only big players could afford to absorb.
  • Some players in Big Tech, such as Google and OpenAI, may limit their GenAI and GPAI product offerings within the EU to minimize compliance costs. However, given their established market presence and the fact that potential profits will almost certainly outweigh compliance costs, they're likely to continue doing business in the EU.
  • In addition to well-known international consulting firms, we can expect the emergence of independent EU-based companies that specialize in AI risk management products and services fine-tuned for European AI legislation.
  • As the AI Act takes effect, EU-based companies will need a way to certify compliance. For smaller companies that lack the resources and expertise to manage this process internally, external services will be required. This could allow private consultancies that offer independent AI audits, compliance, and trustworthy AI certifications to flourish. However, it's worth noting that these companies would likely need to be government-affiliated, if not government-approved; otherwise, potential clients would have little guarantee that their products and services adhere to existing EU AI legislation.
  • Seeing as robust technical methods for evaluating and authenticating AI-generated content—a requirement under the AI Act—have yet to be developed, a lucrative market sub-domain could emerge for such technologies. In other words, EU AI markets may witness a growing influx of AI companies labeling themselves as “AI detectors” or “AI authenticators,” which would in itself present a variety of new regulatory and ethical challenges.
  • For AI providers whose systems perform functions like emotional recognition, biometric categorization, behavioral profiling, surveillance, and risk assessment of natural persons, all of which are considered high-risk applications under the AI Act, market presence might dwindle. However, seeing as most exemptions regarding these kinds of technologies apply to government use cases, notably across domains like national security, critical infrastructure, law enforcement, and human health and safety, we could see a rise in government contracts with these kinds of AI providers, despite potential pushback from regulators and civil society.
  • AI training, education, and awareness will play a central role in fostering trustworthy AI innovation throughout the EU—the promotion of AI literacy constitutes a core objective of the AI Act. While the EU will roll out its own centralized AI literacy initiatives, the market will also likely begin offering many novel and diverse AI training services, provided by entities such as major consulting firms, academic institutes and nonprofits, e-learning platforms, or emerging start-ups dedicated to AI literacy.
  • Since regulatory sandboxes are critical to evaluating the safety and trustworthiness of AI systems, AI companies that develop regulatory sandbox tools could flourish in EU markets. For particularly informative but complex and time-consuming safety measures, such as adversarial testing and impact assessments, AI applications designed for these purposes could also gain a strong foothold in the EU market.
  • Understanding how AI is leveraged to drive, execute, or supplement consequential human decision-making processes is ethically and legally crucial, especially when it comes to the provision of essential goods and services. In this respect, research institutes and independent professionals who specialize in the ethics and legality of AI decision-making could find a lot of value and opportunity throughout EU AI markets.
  • AI is poised to both disrupt and transform the social fabric of our world, which will affect the way that fundamental human rights are applied and understood as AI advances. This phenomenon could give rise to government-affiliated nonprofits and legal institutes guided by the mission of understanding the evolution of human rights alongside AI innovation.

We recognize that our EU market predictions don't encompass all possible trends that could emerge from AI Act implementation, nor are we saying that the trends we predict are guaranteed to materialize. However, in light of our previous explorations and discussions of the AI Act, these predictions represent our best educated guesses as to what might happen within EU markets.

Discussion

Before we conclude, let’s return to an earlier question we raised: is the EU AI Act’s regulatory approach balanced enough? In a nutshell, the overall answer is yes, notwithstanding a few exceptions.

In our previous posts on this topic, we discussed the AI Act’s horizontal structure, and whether this all-encompassing regulatory approach will leave enough room for continued AI innovation, experimentation, and development. We think that it will, primarily because the Act strongly favors regulating technology deployment over development and outlines, as one of its core objectives, the promotion of a vibrant and diverse EU-based trustworthy AI ecosystem. However, there is a possibility that provisions specific to GenAI and GPAI systems are too stringent and that this may not only diminish the diversity of AI applications within EU-based markets but also prevent EU citizens from realizing the benefits associated with these powerful technologies.

Still, via mechanisms such as the promotion of AI literacy and regulatory sandboxes, it's clear that the AI Act has no intention of stifling AI innovation. What remains to be seen is whether such mechanisms will work as intended or produce unexpected negative externalities, though we're optimistic that they'll prove effective. Moreover, by directly targeting AI providers and deployers, the AI Act adopts a scale-oriented perspective that protects the rights of EU citizens and vital institutions without subjecting them to undue bureaucracy.

Furthermore, the tiered risk classification structure of the AI Act is quite reasonable overall: it directly corresponds not only with core EU values but also with the general value structure of most functioning democracies. Gauging the risks AI systems pose by reference to the adverse impacts they could generate on fundamental rights, health and safety, national security, critical infrastructure, and democracy constitutes a well-rounded AI risk strategy. While it's possible that some additional domains of interest have not been covered, this isn't a major cause for concern because the AI Act was designed to be flexible and adaptable, and is therefore amenable to change.

There is, however, one component of the AI Act that could be interpreted as a regulatory oversight: the use of AI for scientific R&D or military operations falls entirely outside its scope. In both of these domains, AI could inspire profoundly positive impacts, such as rapidly accelerating the development of disease treatments and cures or helping militaries conduct far more targeted operations that dramatically reduce the risk of civilian casualties. Nonetheless, AI could also fast-track the development of extremely dangerous technologies in these areas, such as bio-engineered pathogens, lethal autonomous weapons systems, or even surveillance technologies intended for foreign markets.

Opting not to create AI-specific regulations for these domains was likely a strategic decision on the part of the EU, especially since many of the technologies covered, such as those for biometric categorization, have dual-use cases that fall within the scope of the AI Act. Moreover, it's possible that rigorous existing standards, policies, and procedures across these domains, like ethics review boards or international law, already capture most potential AI risks. Alternatively, the EU could simply be more concerned with getting things right from the get-go, recognizing that a mature understanding of AI impacts across scientific R&D and the military has yet to emerge. In other words, poorly designed regulation could be more harmful than no regulation at all. Still, the clock is ticking, and regulation will soon be needed in both of these areas.

To follow recent developments in the AI policy landscape, and gain high-level insights into the most influential current and future AI legislation, follow Lumenova AI's blog, where you can also dive into a content pool that explores the latest advancements in AI risk management, responsible AI, and generative AI. If you have specific concerns about compliance or risk management, we invite you to check out Lumenova AI's RAI platform and book a product demo today.


Decoding the EU AI Act Series

Decoding the EU AI Act: Scope and Impact

Decoding the EU AI Act: Regulatory Sandboxes and GPAI Systems

Decoding the EU AI Act: Transparency and Governance

Decoding the EU AI Act: Standardizing AI Legislation

Decoding the EU AI Act: Influence on Market Dynamics

Decoding the EU AI Act: Future Provisions


