November 13, 2025

A Summary of prEN 18286: Quality Management System for EU AI Act Compliance


In October, CEN-CENELEC JTC 21 released a draft of prEN 18286 (Quality Management System for EU AI Act Compliance), the first EU AI Act harmonized standard to reach the enquiry stage. Importantly, during this stage, the draft is circulated to all CEN and CENELEC national members to gather technical comments, assess the level of support across EU member states, and isolate substantive objections or deviations requiring resolution. The enquiry process is likely to last several months, and prEN 18286 will not become a formal harmonized standard until it is voted upon, adopted by EU member states, and published in the Official Journal of the EU.

Consequently, this post will offer an early-stage, detailed breakdown of prEN 18286, though, due to anticipated changes and revisions, we’ll hold off on discussing the potential implications of this standard until it’s official. Should any substantial developments occur in the near term, we’ll ensure readers are kept up-to-date. 

prEN 18286: Detailed Breakdown 

For this breakdown, we’ll avoid referencing specific clauses and sub-clauses, since this may create confusion if/when modifications are made to the standard. Nonetheless, we move through prEN 18286 systematically, mostly following the structure of the original draft (linked above). 

  • Overview: prEN 18286 (hereafter referred to as “the standard”) establishes a Quality Management System (QMS) framework for AI providers needing to demonstrate conformity with the EU AI Act’s AI system requirements, notably Article 17, which outlines quality management systems. The standard currently contains ten normative clauses, five informative annexes, and is designed to be compatible with existing management systems, specifically ISO 9001, ISO 13485, and ISO/IEC 42001. Overall, the framework provides a lifecycle-based approach, covering AI system design, deployment, monitoring, and retirement.

Important Note: Once prEN 18286 is finalized and published in the Official Journal of the EU, AI providers who can demonstrate conformity with it will be granted a presumption of conformity, meaning that they have met the legal obligations set by Article 17 of the EU AI Act, unless relevant authorities can prove otherwise. 

First and foremost, the standard centers on AI system providers (i.e., developers/entities placing AI systems on the market or into service under their name/trademark), effectively covering organizations of all sizes within or entering the EU market, and applying to systems functioning as standalone products or components of a broader system (e.g., AI modules). Ultimately, providers must show how their QMS ensures the protection of health, safety, and fundamental rights throughout the entire AI lifecycle. While we recommend that readers review the standard directly for all terms and definitions, we offer some key summaries below: 

  • AI System: A machine-based and possibly adaptive (post-deployment) system that can generate outputs capable of influencing real or virtual environments. 
  • Provider: An entity that develops or places an AI system on the EU market under its own name/trademark. 
  • Deployer: An entity that utilizes an AI system under its authority (personal use excluded). 
  • Affected Person: Any individual or group that is impacted by an AI system’s operation. 
  • Fundamental Rights: The rights and freedoms protected by the EU Charter. 
  • Serious Incident: An event that causes serious harm, significantly infringes upon fundamental rights, or causes death. 
  • Substantial Modification: Any change that affects the intended purpose or compliance of an AI system, triggering a new conformity assessment. 

Quality Management System Requirements

General Requirements 

AI providers are required to establish, document, implement, maintain, and continually improve their QMS to showcase EU AI Act compliance. The QMS must include: 

  1. Plans for identifying and monitoring regulatory obligations. 
  2. Processes for ensuring the protection of health, safety, and fundamental rights. 
  3. Mechanisms for post-market monitoring and incident reporting. 
  4. Protocols for maintaining traceability, documentation, and transparency. 

As for regulatory requirements, providers should first determine and review which EU and national AI regulations are applicable, identify which requirements qualify as essential, isolate post-market monitoring and serious incident reporting duties, and establish data management and retention strategies. 

The QMS scope, which must be documented and updated when changes occur, should outline which AI systems/organizational units are included, necessary domain/sector/geographical coverage, and, for multi-organization providers, the roles and interfaces of each entity involved in the QMS. Crucially, the QMS must operationalize seven essential requirements, which mirror Chapter III, Section 2 of the EU AI Act: 

  1. Risk Management System 
  2. Data & Data Governance 
  3. Technical Documentation 
  4. Record-Keeping 
  5. Transparency & Information Provision to Deployers 
  6. Human Oversight 
  7. Accuracy, Robustness & Cybersecurity 

Finally, providers must record how compliance is achieved by specifying which harmonized standards, common specifications, or internal technical methods are utilized, explaining how each of these measures addresses and resolves identified risks and essential requirements, and documenting the rationale behind the selection of each solution. 
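To make the compliance-recording duty above concrete, here is a minimal, hypothetical sketch in Python of a mapping from each of the seven essential requirements to the measures, risks, and rationale a provider might document. The schema and field names are our own illustration, not taken from the standard.

```python
from dataclasses import dataclass

# The seven essential requirements the QMS must operationalize
# (mirroring Chapter III, Section 2 of the EU AI Act).
ESSENTIAL_REQUIREMENTS = [
    "risk_management",
    "data_governance",
    "technical_documentation",
    "record_keeping",
    "transparency",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
]

@dataclass
class ComplianceMapping:
    """Hypothetical record of how one essential requirement is met."""
    requirement: str
    measures: list          # harmonized standards, common specs, or internal methods
    risks_addressed: list   # identified risks the measures resolve
    rationale: str          # why these measures were selected

def missing_mappings(mappings):
    """Return essential requirements with no documented compliance mapping."""
    covered = {m.requirement for m in mappings}
    return [r for r in ESSENTIAL_REQUIREMENTS if r not in covered]

# Example: a provider that has so far documented only two of the seven areas.
mappings = [
    ComplianceMapping("risk_management", ["prEN 18228"],
                      ["model drift"], "aligned with sector practice"),
    ComplianceMapping("data_governance", ["prEN 18284"],
                      ["label bias"], "covers dataset audit duties"),
]
print(missing_mappings(mappings))  # the five requirements still lacking evidence
```

A gap check like this could feed an internal audit: any requirement returned by `missing_mappings` has no recorded measures, risks, or rationale yet.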

Documentation Requirements 

Documentation requirements cover three core areas: QMS documentation, operational documentation, and document control. 

  1. QMS Documentation → Includes the QMS scope, quality policy, and objectives, relevant processes, roles, procedures, and evidence, planning and control methods, and version-controlled, auditable, and accessible records written in official EU languages. 
  2. Operational Documentation → Formal protocols for the planning, operation, and control of AI system processes, which include communication records, traceability files, and audit trails. 
  3. Document Control → All documentation must guarantee format integrity and confidentiality, version control and approval workflows, change traceability, and retention for regulatory minimums. 

Management Responsibility

Top management is tasked with integrating QMS requirements into business processes, allocating adequate resources for sustaining and fulfilling said requirements, ensuring the overall effectiveness of the established QMS, and communicating the importance of regulatory compliance to all relevant stakeholders. 

Management must further provide a formal, documented quality policy that covers the organization’s commitment to applicable regulations and continual improvement processes, a framework for meeting quality objectives and standards, how the quality policy aligns with the organization’s regulatory strategy, and how the quality policy will be distributed to key personnel. 

In this context, providers must also ensure that roles and responsibilities are assigned to personnel with relevant experience, that a compliance manager (or equivalent) is designated as responsible for QMS performance, and that all applicable roles, authorities, and reporting lines are well documented and communicated. 

Planning & Support Processes 

Planning provisions target two domains: risk-based planning and quality objectives. Support processes, on the other hand, concern resources, competence, and communication. 

  • Risk-Based Planning: Providers must identify and address the array of risks and opportunities that can potentially affect their QMS and ability to fulfill regulatory objectives. Their final risk-based plans must ensure that all suggested actions are proportionate to an AI system’s possible impacts on health, safety, and fundamental rights. 
  • Quality Objectives: Objectives must be measurable, realistic, and consistent with the established quality policy, aligned with fundamental rights protection and EU AI Act obligations, and specify who, what, when, and how results will be evaluated. 
  • Resources: Providers must determine and provide the human, technical, and infrastructural resources required to implement and uphold their QMS, including any necessary data/computational resources and secure supply chain mechanisms. 
  • Competence: Procedures to guarantee core personnel competency via education, training, and evaluation must be implemented, and evidence of ongoing qualifications and training efficacy must be maintained. 
  • Communication: Internal and external communication protocols should cover relevant authorities (e.g., market surveillance authority, notified bodies), distributors/importers/deployers, incident and non-conformity notifications, and the delivery of compliance documentation. 

Product Realization 

Risk Management System 

Risk management processes must be applied continuously throughout the entire AI lifecycle, and include mechanisms for identifying, evaluating, and mitigating risks, processes for traceability to lifecycle stages and design decisions, and residual risk documentation. Importantly, lifecycle stages must be clearly defined as design, development, testing, deployment, maintenance, and retirement, and mapped with corresponding and appropriate controls, responsibilities, and verification steps.

  • Note: Providers must also identify environmental impacts like energy use, emissions, and lifecycle materials, and correspondingly implement appropriate mitigation and sustainability measures. 

For design and development specifically, providers must include relevant plans, explanations of design input/output controls, and descriptions of verification, validation, and change management processes; all changes should be risk-assessed and subsequently documented. Moreover, verification and validation approaches must establish and implement repeatable procedures that can ensure outputs consistently meet input and application requirements. Where/when high-risk AI systems are employed, providers should be ready to administer independent evaluations as necessary. 

Separately, providers must establish comprehensive data management procedures for: 

  • Data acquisition, labelling, storage, filtration, mining, aggregation, and retention. 
  • Lifecycle-specific data handling (e.g., training, validation, testing, post-market monitoring, etc.). 
  • Data destruction and reuse policies when systems are decommissioned. 

For each AI system, providers must maintain documentation that: 

  • Demonstrates its conformity with the EU AI Act. 
  • Describes design rationale, architecture, datasets, and testing protocols. 
  • Includes configuration and version information (i.e., software bill of materials). 
  • Contains and describes deployment instructions, accuracy, robustness, and cybersecurity information. 
  • Is continuously updated to reflect system modifications. 
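The last point above, keeping documentation synchronized with system modifications, can be sketched as a simple staleness check. This is purely illustrative: the `TechFile` fields are our own assumptions about what such a record might track, not a schema from the standard.

```python
from dataclasses import dataclass

@dataclass
class TechFile:
    """Hypothetical per-system technical documentation record."""
    system_id: str
    system_version: str      # configuration/version info (e.g., from a software bill of materials)
    documented_version: str  # the version the technical file currently describes
    sections: tuple          # design rationale, architecture, datasets, testing, ...

def needs_update(tf: TechFile) -> bool:
    """Documentation must be revised whenever the deployed system is modified."""
    return tf.system_version != tf.documented_version

tf = TechFile("credit-scoring", "2.3.1", "2.2.0",
              ("design_rationale", "architecture", "datasets", "testing"))
print(needs_update(tf))  # True: the technical file lags behind the deployed version
```

In practice a provider would run a check like this against release records, flagging any system whose technical file no longer matches what is actually on the market.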

Operation & Control 

For deployment and monitoring, providers must ensure the existence of deployer-facing mechanisms for risk reporting and mitigation guidance, while also establishing their own plans and procedures for deployment, feedback collection, and issue resolution. Additionally, all external suppliers (i.e., data, model, and service providers) must be qualified and monitored such that the extent of controls applied is proportional to the supplier’s potential impact on health, safety, and fundamental rights. All supplier contracts should clearly define conformity obligations, verification, and change management requirements. 

Before any changes to AI systems are made, formal change-control procedures must be implemented, covering how changes are identified, evaluated, documented, and approved. These procedures should also include any pre-determined change plans for adaptive/continuous learning systems, as well as version identification and monitoring thresholds. 
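Combining the change-control procedure above with the standard’s definition of a substantial modification (any change affecting the intended purpose or compliance of an AI system, triggering a new conformity assessment), a change-triage step might look like the following sketch. The three boolean inputs and the category labels are our own simplification, not terminology from the standard.

```python
def classify_change(affects_intended_purpose: bool,
                    affects_compliance: bool,
                    in_predetermined_plan: bool) -> str:
    """Hypothetical triage for a change-control procedure.

    A change covered by a pre-determined change plan (e.g., for
    adaptive/continuous learning systems) is handled under that plan;
    otherwise, any change affecting intended purpose or compliance is a
    substantial modification requiring a new conformity assessment.
    """
    if in_predetermined_plan:
        return "covered_by_change_plan"
    if affects_intended_purpose or affects_compliance:
        return "substantial_modification"  # new conformity assessment required
    return "routine_change"  # identify, evaluate, document, approve, and version

print(classify_change(True, False, False))   # substantial_modification
print(classify_change(False, False, True))   # covered_by_change_plan
```

The ordering matters: checking the pre-determined plan first reflects the idea that planned, pre-assessed adaptations do not retrigger conformity assessment, while unplanned changes are evaluated on their impact.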

Every individual AI system (or family of systems) must be accompanied by a risk-proportional post-market monitoring plan whose results feed into continuous improvement and risk management updates, and which collects and analyzes data on: 

  • Usability, oversight efficacy, and user experience. 
  • Incidents, malfunctions, near-misses, and discriminatory outcomes. 
  • Environmental and operational context changes. 

When serious incidents occur, providers must initiate corrective actions, perform root-cause analysis, document lessons learned, and update technical files while also informing the competent market surveillance authority and, if applicable, the notified body (reporting timescales differ slightly based on the nature of the incident, though providers should assume immediate reporting is required). On top of this, providers must continuously detect, record, and address nonconformities, evaluating causes and the need for action, verifying the efficacy of implemented corrections, and making appropriate QMS changes if necessary. 

Performance Evaluation & Improvement 

For their QMS to effectively maintain compliance with the EU AI Act while protecting health, safety, and fundamental rights, providers must regularly review risk-control efficacy, quality-objective achievement, and stakeholder feedback, including feedback obtained from affected persons. 

More specifically, top management itself is required to periodically review policy adequacy, objectives and results, incidents and corrective actions, and opportunities for improvement. All such reviews must ultimately be documented in detail and include escalation and confidentiality mechanisms for internal whistle-blowing. 

Finally, providers must ensure that all QMS changes are planned, resourced, and documented while also taking steps to continually enhance the effectiveness of their QMS using the following mechanisms: 

  • Audits, continuous monitoring, and data analysis. 
  • Corrective and/or preventive actions. 
  • Innovation and feedback integration. 

Annexes 

The standard incorporates five annexes: Annexes A through D, and Annex ZA. We’ll briefly summarize each of these annexes below: 

  1. Annex A: Articulates a structured process for engaging affected persons, outlining that consultations must begin at the design phase and continue through testing and deployment, that engagement measures are inclusive, accessible, age-appropriate, and capacity-building, and that consultation outcomes directly inform risk identification and mitigation procedures. 
  2. Annex B: Describes the relationship with other prEN AI standards, including 18228 (AI Risk Management), 18284 (Data Quality & Governance), 18282 (Cybersecurity for AI), and 18229-1/-2 (AI Trustworthiness Framework, Parts 1 and 2). 
  3. Annex C: Maps prEN 18286 clauses to ISO 9001, to enable integrated audits. 
  4. Annex D: Links QMS processes to the AI Management System controls (ISO/IEC 42001 Annex A) for complementarity (i.e., prEN 18286 → quality/regulatory conformance; ISO/IEC 42001 → governance and risk framework). 
  5. Annex ZA: Illustrates a direct mapping between each QMS requirement and the requirements set by EU AI Act Articles 11, 17, and 72. 

Conclusion 

We’ve now provided a detailed breakdown of prEN 18286; we’ll wrap up with a series of standard-specific key takeaways, expanded upon below: 

  • prEN 18286 is a cornerstone harmonized standard for the EU AI Act: The formal QMS framework allows AI providers to showcase direct compliance with the EU AI Act’s provisions and obtain a presumption of conformity once the standard is cited in the Official Journal of the EU. 
  • The standard is provider-centric: It applies to entities that develop, market, or put AI systems into service under their own name or trademark, requiring full lifecycle accountability, even if development is outsourced. 
  • Translates Article 17 requirements into an auditable management system: Regulatory, technical, and ethical requirements are unified within a single framework and aligned with other key standards like ISO 9001 and ISO/IEC 42001. 
  • Tight Alignment with ISO 9001 and ISO/IEC 42001: While providers can’t demonstrate conformity with the standard solely via compliance with relevant ISO standards, prEN 18286 requirements can be integrated into existing AI quality/management systems, allowing providers to pursue joint, not separate, audits for compliance. 
  • Lifecycle risk management is the heart of the standard: Providers must manage quality, risk, and compliance from AI design to decommissioning, applying proportional controls based on system risk and impacts on health, safety, and fundamental rights. 
  • Protection for fundamental rights is mandatory: All AI processes (i.e., design, data management, validation, testing, deployment, monitoring, etc.) must ensure that potentially affected persons’ fundamental rights are protected and preserved. 
  • Seven essential requirements form the compliance backbone: These requirements are operationalized in the standard, and can be found in Chapter III, Section 2 of the EU AI Act. 
  • Comprehensive documentation and traceability are non-negotiable tenets: Providers are required to show how each AI process they pursue guarantees compliance, safety, and fundamental rights protection, relying on mechanisms like the QMS manual, lifecycle records, and regularly updated technical documentation.  
  • Top management has specific responsibilities: Leadership is accountable for approving policies, allocating resources appropriately, reviewing performance, and validating the integration of EU AI Act obligations with business operations. 
  • Data management is a regulated process: Controls for data quality, provenance, labelling, bias mitigation, storage, retention, and destruction must be implemented and accompanied by documented procedures for data reuse and repurposing. 
  • Supply-chain and third-party governance constitute formal obligations: Providers have to qualify, monitor, and contractually bind suppliers, model developers, and data providers to equivalent conformity and cybersecurity requirements. 
  • Post-market monitoring and incident reporting are continuous processes: Feedback collection, system performance analysis, relevant documentation updates, and serious incident reporting are processes that must be pursued and upheld continuously. 
  • Environmental sustainability is an explicit quality objective: The identification, documentation, and mitigation of environmental impacts associated with AI lifecycle processes is mandatory. 
  • Participatory governance via engagement with affected persons: Annex A lays out a participatory governance protocol to ensure those affected by AI systems are adequately consulted throughout the lifecycle to inform risk identification, mitigation, and continuous improvement. 
  • prEN 18286 operationalizes the EU AI Act: The standard bridges the gap between the abstract and concrete, converting regulatory principles into verifiable management practices. 

While waiting for further developments and, eventually, formal publication in the Official Journal of the EU, we recommend that organizations or entities subject to the EU AI Act’s provisions focus on obtaining conformity with current ISO AI management/quality standards, specifically ISO 9001 and ISO/IEC 42001. Although compliance with these standards is insufficient for conformity with prEN 18286, it will lay much of the foundational groundwork required to comply with it without increasing the compliance burden organizations can expect to face. 

For readers interested in exploring more content on AI regulation, risk management, and governance, we recommend following Lumenova AI’s blog, especially if you need to track and understand the latest updates in the AI landscape. 

For those who wish to take practical steps toward improving and streamlining their AI governance and risk management initiatives, we invite you to check out Lumenova AI’s responsible AI platform and book a product demo today. 

