June 10, 2023

What Businesses Need to Know About the EU AI Act


The EU AI Act, also referred to as the “Proposal for a Regulation Laying Down Harmonised Rules On Artificial Intelligence (Artificial Intelligence Act)” or, simply, the AIA, is a draft regulation put forward by the European Commission on 21 April 2021.

The proposed law addresses AI providers, developers, and users, outlining their respective responsibilities when developing, deploying, or using artificial intelligence within the European Union.

As the world’s first comprehensive AI regulation, the EU AI Act seeks to set an international benchmark for AI systems across sectors and industries.

Overall, the legislation aims to create an environment where AI technologies are safe, transparent, traceable, non-discriminatory, and environmentally friendly, and remain subject to human oversight to prevent harmful, or even disastrous, outcomes.

The Purpose of the EU AI Act

The purpose of the EU AI Act is, in short, to ensure the responsible and secure use of AI systems within the EU. It is important to note, however, that the Act has extraterritorial reach: it applies not only to entities operating within the EU but also to companies based outside the bloc that develop or deploy AI systems within EU territory.

This inclusive approach ensures that all organizations engaging in AI-related activities in the EU adhere to the regulations set forth by the Act.

By adopting a risk-based approach, the Act prohibits AI use cases that pose an unacceptable level of risk, while permitting high-risk AI systems to be deployed on the condition that they meet mandatory requirements and undergo thorough ex-ante conformity assessments. This assessment process applies both before high-risk AI systems are placed on the market and throughout their lifecycle, to ensure ongoing compliance.

To enforce compliance, the AI Act imposes substantial fines for prohibited AI practices and places stringent restrictions on high-risk AI applications.

By prioritizing safety, transparency, and accountability, the EU seeks to foster an environment where AI technologies can thrive while protecting individuals' rights and ensuring the responsible deployment of AI systems.

More on the EU AI Act’s Risk-Based Approach

As previously mentioned, the EU AI Act proposes a risk-based framework that categorizes AI applications into four distinct risk levels: unacceptable risk, high risk, limited risk, and minimal risk. Here’s what you need to know.
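For teams that keep an internal inventory of AI systems, the four tiers can be modeled very simply. The Python sketch below is purely illustrative: the class name, the example use cases, and their tier assignments are our own assumptions, and in practice classification follows the Act’s own definitions and annexes, not a lookup table.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act's framework."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # allowed, subject to mandatory requirements
    LIMITED = "limited"            # allowed, subject to transparency obligations
    MINIMAL = "minimal"            # unrestricted use

# Hypothetical example use cases mapped to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "government social scoring": AIActRiskTier.UNACCEPTABLE,
    "CV-sorting for recruitment": AIActRiskTier.HIGH,
    "customer-service chatbot": AIActRiskTier.LIMITED,
    "email spam filter": AIActRiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.value} risk")
```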

Unacceptable Risk: Upholding Safety and Rights

AI systems that pose a clear and undeniable threat to the safety, livelihoods, and rights of individuals fall into the category of unacceptable risk.

These include AI systems that use subliminal techniques to manipulate behavior, exploit the vulnerabilities of specific individuals or groups, or enable social scoring, among others.

The deployment of these systems is unequivocally prohibited within the EU.

High Risk: Addressing Critical Areas and Impacts

AI systems identified as high-risk are those playing significant roles in areas that can have a profound impact on individuals and society.

High-risk AI systems can be found across various domains, taking the form of:

  • Critical infrastructures (e.g., transportation) that could endanger citizens’ lives and well-being.
  • Educational or vocational training systems that have far-reaching implications for access to education and professional pathways, including the scoring of exams.
  • Safety components of products, such as AI applications in robot-assisted surgery, where precision and reliability are paramount.
  • Employment management systems affecting workers' rights and opportunities for employment, including CV-sorting software used in recruitment procedures.
  • Essential private and public services, like credit scoring, which directly impact citizens' ability to secure loans and financial opportunities.
  • Law enforcement systems that require careful evaluation to prevent interference with fundamental rights and maintain the reliability of evidence.
  • Migration, asylum, and border control management systems, which are crucial for verifying the authenticity of travel documents and ensuring the security of borders.
  • The administration of justice and democratic processes, where applying the law to concrete sets of facts requires responsible and accountable AI systems.

Limited Risk: Enhancing Transparency and User Awareness

Within the EU AI Act’s risk-based framework, limited risk pertains to AI systems that carry specific transparency obligations.

For instance, when interacting with AI systems like chatbots, users should be informed that they are engaging with a machine rather than a human.

This transparency empowers individuals to make informed decisions about continuing the interaction or stepping back.

By emphasizing transparency in limited-risk AI systems, the EU AI Act aims to enhance user awareness and ensure responsible AI use while maintaining a balance between innovation and user rights.
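As a concrete illustration, a chatbot operator might meet this obligation by prefixing each conversation with an explicit notice. The sketch below is hypothetical; the Act sets out the obligation, not any particular implementation.

```python
def with_ai_disclosure(reply: str) -> str:
    """Prefix a chatbot reply with a machine-interaction notice,
    so the user knows they are talking to an AI system."""
    disclosure = "Note: you are chatting with an automated AI assistant."
    return f"{disclosure}\n\n{reply}"

print(with_ai_disclosure("Happy to help! Your order shipped yesterday."))
```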

Minimal or No Risk: Encouraging Innovation and Accessibility

The EU AI Act allows for the unrestricted use of AI systems categorized as minimal or no risk.

This category encompasses applications such as AI-enabled video games or spam filters, which have minimal potential to harm individuals or society.

These AI systems are considered to pose negligible risks, contributing to various domains without compromising safety, privacy, or fundamental rights.

Obligations for High-Risk AI Systems: Ensuring Accountability and Safeguards

Under the proposed EU AI Act, high-risk AI systems will face rigorous obligations before they can enter the market. These obligations are designed to ensure accountability, minimize risks, and protect individuals' rights.

The requirements for high-risk AI systems include:

  • Adequate Risk Assessment and Mitigation: High-risk AI systems must undergo thorough risk assessment procedures, identifying potential hazards and implementing measures to mitigate those risks effectively.
  • High-Quality Datasets: The datasets used to train high-risk AI systems must adhere to strict standards to minimize risks and prevent discriminatory outcomes.
  • Activity Logging and Traceability: High-risk AI systems are required to maintain comprehensive logs of their operations, enabling traceability and facilitating accountability for their results (a minimal logging sketch follows this list).
  • Detailed Documentation: Extensive documentation must be provided, offering authorities all necessary information about the system and its purpose.
  • Clear and Adequate User Information: Users interacting with high-risk AI systems must receive transparent and understandable information about the system’s capabilities, limitations, and potential implications.
  • Appropriate Human Oversight: High-risk AI systems should incorporate measures to ensure appropriate human oversight, minimizing the potential risks associated with automated decision-making.
  • Robustness, Security, and Accuracy: High-risk AI systems are expected to meet high standards of robustness, security, and accuracy.
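The logging obligation in particular translates naturally into engineering practice. Here is a minimal, hypothetical Python sketch of structured audit logging for a high-risk system’s decisions; the log_prediction helper and its field names are illustrative assumptions, not a schema prescribed by the Act.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_prediction(model_id: str, model_version: str, inputs: dict, output) -> None:
    """Record one model decision as a structured, traceable audit entry.

    The input payload is hashed rather than stored verbatim, so the log
    supports traceability without retaining raw personal data.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
    }
    audit_log.info(json.dumps(entry))

# Example: logging a hypothetical credit-scoring decision.
log_prediction("credit_scorer", "1.4.2", {"applicant_id": "A123"}, "approved")
```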

Enhancing Accountability for Foundation Models and Generative AI

As previously mentioned, transparency is a core principle of the AI Act, which requires that AI-generated content be disclosed as such and that safeguards be established against the generation of illegal or harmful content.

Hence, the most recent draft of the Act also places specific obligations on “foundation models,” such as large language models and generative AI systems, which are required to undergo rigorous safety checks, adhere to robust data governance measures, and implement risk mitigation strategies.

Therefore, foundation model providers like OpenAI and Google will also be bound by explicit obligations under the EU AI Act. They must demonstrate diligent risk assessment and mitigation, provide extensive technical documentation, and ensure data governance and sustainability.
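One way a provider might begin operationalizing these documentation and governance duties is with a structured, model-card-style record. The sketch below is illustrative only; the schema, field names, and example values are our assumptions, not requirements spelled out in the Act.

```python
from dataclasses import dataclass

@dataclass
class FoundationModelRecord:
    """A hypothetical documentation record for a foundation model provider.

    Fields mirror the draft Act's themes (risk mitigation, technical
    documentation, data governance); the Act does not prescribe this schema.
    """
    model_name: str
    provider: str
    intended_uses: list[str]
    risk_mitigations: dict[str, str]  # identified risk -> mitigation measure
    data_governance_notes: str        # sources, curation, copyright review
    sustainability_notes: str         # energy and compute reporting

record = FoundationModelRecord(
    model_name="example-llm",         # hypothetical model
    provider="Example Corp",
    intended_uses=["text drafting", "summarization"],
    risk_mitigations={"harmful content": "output filtering and red-teaming"},
    data_governance_notes="Training sources documented; copyrighted data summarized.",
    sustainability_notes="Training energy and compute usage reported.",
)
print(record.model_name, "documented for", record.provider)
```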

The EU AI Act aims to enhance responsibility and transparency, urging foundation model providers to meet rigorous standards, protect user rights, and drive ethical advancements in AI technology.

The Impact of the AI Act: Shaping a Global Standard

The EU’s AI Act is poised to become a global benchmark, following in the footsteps of the GDPR. Brazil has already passed a similar bill, while Germany supports the Act and has proposed improvements. The Act’s impact will vary across sectors, with regulated products and transparency requirements for AI systems that interact with humans standing out.

Globally, AI systems integrated into regulated products will experience substantial implications. However, the extent of these effects will depend on existing markets, international standards bodies, and the stances of foreign governments.

The introduction of transparency requirements for AI systems interacting with humans will result in widespread disclosure practices across websites and applications on a global scale. This will foster greater transparency and accountability in AI-driven interactions.

As the EU takes the lead in shaping AI regulations, the impact of the AI Act will extend far beyond its borders, influencing the trajectory of AI development worldwide and laying the groundwork for responsible and transparent AI practices.

Regulatory Measures

In an effort to promote innovation, the AIA encourages the establishment of regulatory sandboxes, along with measures aimed at easing the regulatory burden on SMEs and start-ups.

Additionally, the proposal suggests the establishment of a European Artificial Intelligence Board to facilitate collaboration among nations and ensure adherence to the regulation.

Recognizing the importance of citizen empowerment, the Act also grants individuals the right to file complaints against AI system providers.

To ensure effective enforcement, provisions are made for the creation of an EU AI Office tasked with monitoring compliance with the legislation.

Furthermore, member states are required to appoint national supervisory authorities for AI, reinforcing oversight at the national level.

Stringent Penalties for Prohibited AI Practices

The AI Act introduces robust penalties for prohibited AI practices: violations can result in fines of up to €40 million ($43 million) or up to 7% of a company’s worldwide annual turnover, whichever is higher.
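As a quick worked example of that formula (a sketch of the arithmetic only; actual fines would be set case by case by regulators):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited practices under the draft Act:
    the greater of EUR 40 million or 7% of worldwide annual turnover."""
    return max(40_000_000, 0.07 * worldwide_annual_turnover_eur)

# For EUR 2 billion in turnover, 7% (EUR 140 million) exceeds the EUR 40 million floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```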

These penalties surpass the fines outlined in Europe’s renowned data privacy law, the General Data Protection Regulation (GDPR), as the EU strives to establish a strict deterrent framework in order to ensure compliance with the AI regulations.

The Road Ahead for the AI Act

Following the anticipated plenary vote by MEPs this summer, the AI Act will proceed to the final stage of the legislative process: “trilogue” negotiations involving the European Commission, Parliament, and the Council.

The latest draft of the AI Act incorporates many of the ethical AI principles advocated by various organizations.

However, some voices, such as the Computer and Communications Industry Association, have expressed concerns regarding the expanded scope of the AI Act, cautioning that it may inadvertently encompass harmless forms of AI.

Guidance for Businesses

Businesses must prioritize their understanding of the EU AI Act and its impact on compliance. Staying informed about the evolving regulations and laws concerning AI is essential to avoid substantial penalties.

The AI Act adopts a risk-based approach to categorizing AI applications and encourages regulatory sandboxes, seeking to strike a balance between innovation and ethical practice.

Nevertheless, there are valid concerns that warrant the attention of businesses worldwide, such as the extraterritorial reach of the legislation, the implications of strict obligations, potential challenges in interpretation, and the effects on small-scale providers and startups. These issues require careful consideration and resolution.

Lumenova AI - Your Trusted Guide in the EU AI Act Journey

Lumenova AI’s state-of-the-art Responsible AI Platform is designed to empower your compliance efforts, so your organization can adhere to the EU AI Act’s requirements.

  • Compliance Oversight: Our proprietary frameworks help you identify and prioritize compliance areas effectively, giving you full control over your AI systems.
  • Stay Ahead of AI Policy: Leverage our Policy Library to stay up-to-date with the latest AI laws, regulations, and industry standards. We make it simple for you to incorporate and operationalize evolving compliance requirements.
  • Streamlined Governance Processes: Integrate our platform seamlessly with your existing AI systems and tools. Embed responsible AI practices effortlessly into your workflows, ensuring compliance without disruption.
  • Foster Collaboration: Encourage collaboration among diverse teams, including data scientists, risk and compliance officers, product teams, and business leaders.
  • Scalable Model Governance: Establish a robust model governance program that ensures consistent practices throughout the AI lifecycle and across teams. Achieve efficiency, reliability, and compliance at scale.

Compliance with the EU AI Act is vital for your organization’s prosperity, and we understand its significance. Discover how seamlessly Lumenova AI integrates with your existing ML stack, simplifying your compliance journey. Get in touch with us at sales@lumenova.ai or via our contact form.


