May 2, 2024

Decoding the EU AI Act: Future Provisions

This post will be less in-depth than our previous content on the EU AI Act, mainly because the other posts in this series have all taken a deep dive into its comprehensive structure and potential impacts. However, we do know that certain components of the AI Act still require additional outlining and specification, and when we speak of “future provisions,” this is what we mean.

Still, in line with the forward-thinking, predictive approach we’ve taken in the latter half of our exploration of this topic, we’ll conclude this post by offering some insights on possible future provisions—ones that aren’t explicitly outlined in the AI Act but that we think could feasibly emerge.

First, however, we’ll briefly describe how the AI Act will be applied once it takes effect. It’s worth noting that this post builds on all of our other EU AI Act content, so we strongly recommend that readers review our Decoding the EU AI Act series for a holistic understanding of this topic.

AI Act Application Details

The EU AI Act was passed in March of this year and will take effect sometime in May or June, depending on when it’s published in the Official Journal of the EU. However, not all of the Act’s provisions will take effect immediately—some will be applied and enforced in phases, as shown below:

  1. Provisions for prohibited AI systems will be enforced approximately six months after the AI Act takes effect, i.e., roughly by November or December of this year.
  2. Common codes of practice will be established and applied around nine months after the AI Act takes effect, by February or March of 2025.
  3. Transparency obligations for general-purpose AI (GPAI) systems will be enforced one year after the AI Act takes effect.
  4. Provisions for high-risk AI systems will be enforced within three years of the AI Act taking effect.

The AI Act will be fully enforceable within two years of taking effect; however, as noted above, certain provisions will be enforced over different timelines. Still, those subject to the AI Act shouldn’t take this as a signal to delay compliance or, by contrast, focus solely on near-term compliance requirements. Two years may seem like a lot of time, but with legislation that seeks to regulate a technology that has never before been regulated, expect messiness and inconsistency, especially during the early application phases—proactive compliance measures should be taken.

Future Provisions

It can’t be stressed enough how comprehensive the EU AI Act is. Nonetheless, certain parts of the Act still require additional development, and in many cases (with the notable exception of codes of practice), specific timelines for this development have yet to be outlined. These future provisions are listed below:

  • Regulatory sandbox guidelines will address three core areas: 1) the creation of eligibility and selection criteria, 2) sandbox operating procedures and standardized exit reports, and 3) terms and conditions requirements. The EU Commission, specifically the AI Board, is responsible for ironing out the specifics of these provisions; however, member states will also be required to develop training, awareness, and communication initiatives targeting how to participate and operate within regulatory sandboxes.
  • The AI Board will support the standardization of administrative practices under the AI Act and provide guidance documents on compliance. More abstractly, the Board is also responsible for enabling the creation of a shared regulatory language that promotes easy and effective communication between the various national authorities of different member states.
  • The AI Office will develop standardized templates for compliance with core components of the AI Act. These templates will cover things like how to participate in regulatory sandboxes or how to ensure that GPAI systems are deployed responsibly.
  • The AI Office is responsible for developing and establishing codes of practice for AI deployment procedures across the EU. For GPAI systems specifically, codes of practice will target technical documentation and systemic risk identification and management. The Office will also begin orchestrating social campaigns educating the public on matters of AI Act compliance.
  • The AI Office will develop a code specific to generative AI systems, applicable throughout the EU, that targets the authentication and detection of AI-generated content.
  • The EU Commission’s scientific panel will identify and implement tools, methods, indicators, and benchmarks, such as impact and risk assessments, that evaluate high-impact capabilities in AI systems. Importantly, GPAI providers will be required to evaluate whether their models pose a systemic risk by utilizing state-of-the-art standardized protocols and tools, although it’s currently unclear what these protocols and tools might be.
  • The EU Commission will establish notification guidelines for serious incident reports submitted to member states’ market surveillance authorities (MSAs). Since MSAs will also have to submit yearly reports on market surveillance activities to the Commission and relevant national competent authorities (NCAs), concrete guidelines will also need to be created for these procedures, even though they’ve already been roughly outlined.
  • Standardized procedures for assessing the competence and resources of member states’ respective national competent authorities (NCAs) will need to be established. Most likely, these procedures will be developed by the AI Office in collaboration with member states’ NCAs.
  • Formal structures for member states’ biennial reports on their NCAs’ resource requirements will be established, most likely by the AI Office or the EU Commission at large, in collaboration with member states.
  • The AI Office will, at the very least, need to develop a loosely standardized structure and procedure for potential joint investigations conducted on behalf of member states’ MSAs and the Commission in response to violations of the AI Act.

For the most part, this list of future provisions covers the key areas of further AI Act development that are worth paying attention to. In cases where future provisions concern GPAI and generative AI systems, those subject to AI Act requirements would be wise to assume such provisions will enter into force sooner rather than later. All that being said, let’s turn our focus to a few predictions for additional future provisions, not outlined in the AI Act, that we think may come to fruition.


Under the AI Act, several high-risk AI designations are categorized by the use of AI in consequential decision-making contexts, like employment, education, and access to critical goods and services. While the AI Act does provide specific examples of what would constitute consequential decision-making across these sectors, additional guidance and formal standards may be required, especially in cases where the degree of human involvement is unclear or where the impacts of the decision being made are difficult to measure and/or interpret.

Moreover, further specification could be necessary for the development, documentation, and submission of both post-market monitoring plans and sandbox plans. Targeted parameters for what constitutes an appropriate versus inappropriate plan will need to be developed on a case-by-case basis to ensure that AI Act provisions are applied fairly in these contexts.

On a different note, generative and GPAI systems could be enormously powerful tools for marketing and sales. While the AI Act explicitly prohibits systems that could be leveraged for social manipulation, exploitation, and control, as well as targeted advertising rooted in differential data characteristics (like race or gender), specific deployment standards for marketers and salespeople may eventually prove necessary. Terms like “targeted advertising” are well-defined and covered by existing legislation like the General Data Protection Regulation (GDPR). However, terms like “social manipulation,” particularly in the context of measurable AI impacts, are still poorly understood. This may allow marketers and salespeople (and potentially other kinds of actors) to exploit loopholes related to these abstract concepts, which could prompt EU legislators to develop a use-case-specific standard with clear regulatory parameters for generative and GPAI systems used for marketing and sales purposes.

More broadly, seeing as the AI Act is designed to be flexible and adaptable, it’s possible that additional standardized mechanisms and procedures for stakeholder feedback will emerge. The EU is a conglomeration of nation-states, and while each of these nation-states is subject to EU law, governance and power-dynamic differences at the national level do exist. To ensure that each nation-state’s voice carries equal weight in the consideration of future AI Act revisions and updates, universally agreed-upon stakeholder feedback mechanisms and procedures, building on the roles that national authorities, the AI Board, and the Commission play, may prove necessary. Currently, it’s still unclear how regular EU citizens can directly engage with the process of influencing future AI legislation.

Finally, as one of its core high-level objectives, the AI Act aims to promote AI literacy. However, the methods and approaches through which this goal will be pursued remain relatively vague—what AI-specific skills will literacy initiatives cultivate, what risks, benefits, and impacts will they cover, and what AI use cases will they highlight? One can reasonably intuit the direction of such initiatives, but intuition is insufficient, especially when dealing with high-impact technologies and/or those that could pose a systemic or national risk.

Moreover, the question of how to ensure anyone within the EU can easily access and benefit from AI literacy initiatives has yet to be answered. It’s possible that each EU nation-state will adopt its own AI literacy strategy; however, this seems unlikely since it would produce a fragmented and discordant regulatory approach, which directly undermines the AI Act’s goal of standardizing AI legislation across the Union. At least during the early stages of AI Act implementation, standardized AI literacy campaigns could prove extremely valuable, especially in terms of raising awareness around AI risks, benefits, and impacts.

There are many more predictions one could make regarding potential future provisions of the AI Act; however, we’ve chosen to highlight the ones above because they represent possible pain points or oversights within the AI Act. That being said, we also recognize that many other pain points worthy of consideration exist—we chose to discuss these because of their imminence, not because they are objectively more important.


Just because this is the final piece in our Decoding the EU AI Act series doesn’t mean we won’t return to this topic in the future. In fact, once the AI Act takes effect, notable legislative challenges, questions, and even revisions are poised to emerge. Furthermore, the AI Act may be the first piece of comprehensive AI legislation released and enacted, but as readers might recall from an earlier post covering the influence of the AI Act on global AI regulation standards, this topic will be the subject of ongoing discourse—AI strategies are taking shape at the global scale, from the African Continental AI Strategy to ASEAN’s guide for AI Governance.

Nonetheless, if readers were to take home one message from this post, it should be the following: continually monitor the actions and initiatives of the AI Office and Board since they will be the entities primarily responsible for overseeing and enforcing the AI Act. In other words, any additional provisions will likely come from them, or at least, start with them.

For readers interested in exploring other topics in the AI policy landscape, or venturing into the domains of generative and responsible AI, we invite you to follow Lumenova AI’s blog.

Alternatively, if you’re working on formalizing an AI governance strategy or figuring out how best to approach AI risk management, check out Lumenova’s platform and book a product demo today.

Related topics: EU AI Act
