
Technical Documentation for High-Risk AI Systems

August 12, 2025

Introduction

The EU AI Act introduces one of the most comprehensive legal frameworks for artificial intelligence to date. At the heart of its obligations for high-risk AI (HRAI) systems is the requirement to prepare and maintain technical documentation. This is not simply an administrative exercise. It is a legal necessity that underpins transparency, accountability and trust in AI systems that have the potential to significantly impact safety or fundamental rights.

Under Article 11, the Act requires providers of HRAI systems to create this documentation before the system is placed on the EU market or put into service. It must remain accurate and up to date for as long as the system is in use. Regulators may request it at any time to assess whether the system complies with the Act’s requirements.

What is Technical Documentation under the EU AI Act?

Technical documentation, as defined in the Act and detailed in Annex IV, is a structured record that describes how an HRAI system is designed, developed, tested and maintained. It is intended to give regulatory authorities a clear and complete picture of:

  • The system’s intended purpose and functionality.
  • Its architecture, components and dependencies.
  • The development process, including data governance and model training.
  • Measures taken to ensure accuracy, robustness, cybersecurity and compliance with applicable standards.
  • The ongoing monitoring, control and risk-management processes.

This documentation is not just for regulators. It is also a practical tool for organisations, helping them ensure their AI systems meet both legal requirements and internal quality benchmarks.

By treating it as a living document rather than a static file, providers can better respond to changes in technology, regulation and real-world performance.

Key Content Areas for Technical Documentation (Annex IV)

For any HRAI system, the EU AI Act sets out a clear list of information that must be included in the technical documentation. These elements, detailed in Annex IV, ensure that the system’s design, operation and compliance can be assessed transparently by regulators. Below is a breakdown of the main content areas:

  1. General AI System Description.
  2. Design and Development.
  3. Monitoring, Functioning, and Control.
  4. Performance Metrics.
  5. Risk Management.
  6. Lifecycle Changes.
  7. Applicable Standards.
  8. Declaration of Conformity.
  9. Post-Market Monitoring.

Technical Documentation Format

When preparing the technical documentation for an HRAI system, the EU AI Act does not prescribe a specific format, which gives providers flexibility in how they compile and present the information. One option is to create a single, consolidated document containing all sections of Annex IV in sequence. This can work well for smaller organisations or simpler systems, as it ensures all relevant material is stored together and can be supplied to regulators in one package. The drawback is that updating individual sections, such as post-market monitoring records or lifecycle changes, may require versioning and reissuing the entire document, which can be administratively heavy.

Alternatively, the technical file can be prepared as a set of linked or referenced documents. In this approach, each section of Annex IV exists as a standalone file or module (for example, a separate “Risk Management” dossier, a “Design and Development” record, and a “Performance Metrics” report), all cross-referenced through an index or master document. This modular structure allows for easier updates and can integrate existing organisational documents, such as QMS records or design specifications, without rewriting them. However, it demands careful document control to ensure that every link remains current and that all referenced material is accessible and complete at the time of submission.
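The master-index approach described above can be kept under programmatic control. As a minimal sketch (the section names, file paths, and review logic are illustrative assumptions, not prescribed by the Act), a small script can record each Annex IV module with its version and last review date, and flag modules that are due for review:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DocumentModule:
    """One Annex IV section maintained as a standalone, versioned document."""
    annex_section: str   # e.g. "Annex IV(5)"
    title: str
    version: str
    last_updated: date
    location: str        # path or document-management-system reference

# Master index cross-referencing each module of the technical file.
technical_file_index = [
    DocumentModule("Annex IV(1)", "General AI System Description", "2.1",
                   date(2025, 6, 30), "docs/general_description.pdf"),
    DocumentModule("Annex IV(2)", "Design and Development", "3.0",
                   date(2025, 7, 15), "docs/design_development.pdf"),
    DocumentModule("Annex IV(5)", "Risk Management", "4.2",
                   date(2025, 8, 1), "docs/risk_management.pdf"),
]

def outdated_modules(index, cutoff):
    """Flag modules not reviewed since a given date, to support document control."""
    return [m for m in index if m.last_updated < cutoff]
```

Even this simple structure gives document control a single point of truth for which version of each module is current at submission time.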

1. General AI System Description

This section serves as the introduction to the technical file and sets the foundation for the entire compliance assessment. It should provide a clear and structured account of the HRAI system, making it possible for a regulator or auditor to understand, at a glance, what the system is, why it exists, and the context in which it operates.

It should begin with the intended purpose of the system, naming the provider and specifying the system’s current version, including how it relates to any previous versions. If the system interacts with external hardware, software, or other AI systems not included in its core design, these connections should be clearly explained, along with any interoperability requirements.

The description should also set out the software or firmware versions relevant to the system and note any update requirements or compatibility considerations. It must describe all forms in which the system is placed on the market or put into service, such as embedded software within hardware products, downloadable packages, or access via APIs. Where relevant, it should include a description of the hardware on which the AI system is intended to run, along with photographs or illustrations showing the external features, markings, and internal layout of any products in which it is integrated.

A basic description of the user interface should be provided to show how deployers interact with the system, supported by a set of instructions for use. This ensures that regulators can understand not only the technical underpinnings of the system but also the practical way in which it is accessed, controlled, and maintained by its intended users. Together, these elements form the essential foundation for the rest of the technical documentation, framing how all subsequent details are to be interpreted.

This section should be written in a way that is accessible to non-specialist readers, while still providing enough technical detail to allow regulators to assess whether the system is being used in accordance with its intended design. Clarity here reduces the risk of misinterpretation later in the review process and helps ensure the rest of the technical documentation is evaluated in the right context.

2. Design and Development

This section of the technical file gives a complete account of how the HRAI system was designed, built, tested and secured. It should enable regulators to trace the system’s development from concept to deployment, and to understand the design logic, data foundations, oversight provisions and security safeguards in place.

Development Methods and Steps

The file should set out the methodologies used during development (e.g. agile, waterfall, hybrid), the sequence of development steps, and how different teams or providers contributed. If pre-trained models, datasets, or third-party AI tools were used, their origin, licensing, and integration process must be documented. This includes any modifications made to adapt them to the system’s intended purpose.

A typical output of this stage is a Design and Development Plan covering the frameworks used, third-party tools and other related resources.

Design Specifications

This part should explain the general logic of the system, its algorithms, and the rationale behind key design choices. It should include assumptions made about target users or groups, main classification or decision-making approaches, optimisation objectives, and the relative weighting of different parameters. The expected outputs should be defined along with quality benchmarks. Any trade-offs made to balance performance with compliance requirements under Chapter III, Section 2 should be recorded.

We recommend putting together a ‘Software Requirements Specification’ list, or similar, to keep track of these requirements and verify them against the final product design at the verification stage.

This section may include:

  • Algorithm design description and logic diagrams.
  • Target user and use-case assumptions.
  • Optimisation goals and parameter weighting tables.
  • Output specification sheet.

System Architecture and Computational Resources

A clear description of the system architecture is needed, showing how software components interconnect, feed into one another, and integrate into the broader processing pipeline. This should also specify the computational resources used to train, validate, and operate the system, including hardware specifications and cloud infrastructure.

Data Requirements and Governance

Datasheets should be prepared for each dataset used in training, validation, and testing. This should cover provenance, scope, size, content type, and representativeness. Data acquisition methods, selection criteria, labelling processes, and data-cleaning methodologies (including outlier detection) must be described.
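A datasheet of this kind can be captured as a structured record so that every dataset is documented consistently. The following is a minimal sketch; the field names and example values are illustrative assumptions, not a format mandated by Annex IV:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetDatasheet:
    """Minimal datasheet record for a training/validation/test dataset."""
    name: str
    split: str                  # "training", "validation" or "testing"
    provenance: str             # where the data came from
    size: int                   # number of records
    content_type: str           # e.g. "tabular", "image", "text"
    selection_criteria: str
    labelling_process: str
    cleaning_steps: list = field(default_factory=list)  # e.g. outlier detection

# Hypothetical example entry for one training dataset.
sheet = DatasetDatasheet(
    name="loan_applications_2024",
    split="training",
    provenance="Internal CRM export, 2020-2024",
    size=120_000,
    content_type="tabular",
    selection_criteria="Completed applications with consent flag set",
    labelling_process="Dual annotation with adjudication",
    cleaning_steps=["deduplication", "IQR outlier removal"],
)
```

Keeping datasheets machine-readable also makes it easier to cross-reference them from the validation and testing section of the file.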

Human Oversight Assessment

An evaluation of the human oversight measures required under Article 14 should be included, detailing the technical features that allow deployers to interpret outputs as per Article 13(3)(d). This ensures the system’s outputs remain transparent and explainable to qualified human operators.

As an output, you could produce a ‘Human Oversight Strategy’ and develop user-interface features that enable interpretability. Training materials to be delivered to deployers for oversight functions could also be put together at this stage.

Pre-Determined Changes

Where the system is designed to undergo pre-defined changes in behaviour or performance (e.g. retraining cycles, adaptive models), these must be described along with the safeguards ensuring ongoing compliance with Chapter III, Section 2 requirements. This section may include reference to internal change control procedures.

Validation and Testing Procedures

This section should set out how the system was validated and tested, specifying the datasets used, their characteristics, and the metrics applied for accuracy, robustness, bias detection, and compliance checks. Test logs should be dated, signed by responsible persons, and include results for both initial deployment and any pre-determined changes. Expected documents in this section may include:

  • Validation plan and test protocol documents.
  • Dataset summaries for validation and testing.
  • Accuracy, robustness, and bias test reports.
  • Test logs with signatures of responsible engineers/test leads.
  • Comparative benchmarking reports.
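Because test logs must be dated and attributable to responsible persons, some teams add a content hash to each entry for tamper evidence. As a hedged sketch (the field names, hashing choice, and example metrics are our illustrative assumptions, not requirements of the Act):

```python
import hashlib
import json
from datetime import date

def make_test_log_entry(test_name, dataset, metrics, signed_by, run_date):
    """Create a test-log entry with a SHA-256 content hash for tamper evidence."""
    entry = {
        "test_name": test_name,
        "dataset": dataset,
        "metrics": metrics,       # e.g. accuracy, robustness, bias scores
        "signed_by": signed_by,   # responsible engineer / test lead
        "date": run_date.isoformat(),
    }
    # Hash a canonical serialisation so any later edit changes the hash.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["content_hash"] = hashlib.sha256(payload).hexdigest()
    return entry

# Hypothetical entry for a pre-deployment accuracy run.
log = make_test_log_entry(
    "pre-deployment accuracy",
    "validation_v3",
    {"accuracy": 0.943, "f1": 0.91},
    "J. Borg (Test Lead)",
    date(2025, 8, 1),
)
```

The hash does not replace a signature, but it lets an auditor confirm that a logged result has not been altered since it was recorded.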

Cybersecurity Measures

All cybersecurity protections applied to the system should be detailed. This includes measures taken during development and those embedded in the operational system to protect against data breaches, model manipulation, adversarial attacks, and unauthorised access.

This section shall include or reference cybersecurity risk assessments, any penetration testing or vulnerability scan reports, as well as incident response procedures.

3. Monitoring, Functioning, and Control

This section describes how the HRAI system operates in practice, including its performance boundaries, oversight measures, and safeguards. It should give regulators and auditors a clear picture of what the system can do, what it cannot do, and how its functioning is monitored to ensure safe, lawful, and ethical operation.

The starting point is a description of the system’s capabilities and limitations, setting out the degrees of accuracy achieved for the specific persons or groups for which the system is intended. This should also indicate the overall expected level of accuracy in the context of its intended purpose, supported by quantitative test data. Any relevant subgroup accuracy metrics should be highlighted to show performance consistency across populations.

The file should then address foreseeable unintended outcomes and sources of risk to health and safety, fundamental rights, and non-discrimination. This includes, for example, potential misclassification risks, biased outputs, or operational errors that could have significant consequences for the affected individuals or communities. Where such risks are identified, the document should explain the mechanisms for detecting and mitigating them in real time or through post-processing.

A detailed account of the human oversight measures required under Article 14 must also be included. This should explain the technical features that enable deployers to interpret the AI system’s outputs, such as confidence scores, decision rationale displays, or alert systems. Oversight roles and escalation procedures should be clear, so that human intervention can occur effectively when needed.

Finally, the section should provide specifications on input data where relevant. This includes format requirements, acceptable ranges or thresholds for values, data validation rules, and how the system handles missing or anomalous inputs. Any input-related safeguards that prevent the system from operating outside safe or intended conditions should be documented.
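Input-related safeguards of this kind are often implemented as a validation layer in front of the model. A minimal sketch, assuming a hypothetical tabular system with our own example schema (the field names and ranges are illustrative, not from the Act):

```python
def validate_input(record, schema):
    """Check an input record against format and range rules before inference.

    schema maps field name -> (type, (min, max) or None, required).
    Returns a list of violations; an empty list means the input is accepted.
    """
    violations = []
    for field_name, (expected_type, value_range, required) in schema.items():
        value = record.get(field_name)
        if value is None:
            if required:
                violations.append(f"{field_name}: missing required value")
            continue
        if not isinstance(value, expected_type):
            violations.append(f"{field_name}: expected {expected_type.__name__}")
            continue
        if value_range and not (value_range[0] <= value <= value_range[1]):
            violations.append(f"{field_name}: out of range {value_range}")
    return violations

# Hypothetical schema for a credit-scoring input.
schema = {
    "age": (int, (18, 120), True),
    "income": (float, (0.0, 1e7), True),
}
# An out-of-range age is rejected rather than silently processed.
issues = validate_input({"age": 150, "income": 42000.0}, schema)
```

Rejecting or flagging anomalous inputs before inference keeps the system from operating outside its documented safe conditions, and each rejection can be logged for post-market monitoring.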

4. Performance Metrics

This section provides the quantitative evidence of how the HRAI system performs against its intended purpose. It should present a clear and transparent record of the accuracy, robustness, resilience, and bias performance of the system, as well as any other relevant measures necessary for compliance with the EU AI Act.

The starting point is to document accuracy metrics. These should cover both the overall system performance and the performance across specific persons or groups the system is intended to be used on. Metrics should be supported by test results that reflect real-world conditions as closely as possible, including variations in data quality, environmental factors, or operational scenarios. Any differences in accuracy across subgroups should be explained, with steps taken to address or mitigate disparities.

The documentation should then outline robustness measures, describing how the system maintains performance under variable or challenging input conditions, such as noisy data, adversarial inputs, or partial information. Related to this is resilience, which refers to the system’s ability to recover or degrade safely when exposed to errors, unexpected inputs, or system failures.

Bias and fairness metrics should also be included where relevant, particularly for systems that affect access to services, decision-making, or rights. These could involve statistical parity, equal opportunity, or other domain-specific fairness measures. All metrics must be contextualised, explaining why they were chosen, how they were calculated, and how they align with the system’s intended purpose.
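To make one such measure concrete, statistical parity compares positive-outcome rates across groups. The sketch below is a simple two-group illustration with made-up data; real reporting would use the system's actual decisions and protected attributes:

```python
def statistical_parity_difference(outcomes, groups, positive=1):
    """Difference in positive-outcome rates between two groups.

    outcomes: list of model decisions; groups: parallel list of group labels.
    A value near 0 indicates similar treatment across groups.
    """
    rates = {}
    for g in set(groups):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in selected if o == positive) / len(selected)
    a, b = sorted(rates)  # deterministic group order
    return rates[a] - rates[b]

# Group "A": 3 of 4 positive (0.75); group "B": 1 of 4 positive (0.25).
spd = statistical_parity_difference(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["A", "A", "A", "A", "B", "B", "B", "B"],
)
```

As the documentation requires, the metric should be reported alongside its rationale: why statistical parity (rather than, say, equal opportunity) fits the system's intended purpose, and what threshold is considered acceptable.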

Finally, the section should document performance over time. This includes results from ongoing monitoring, post-market evaluations, and, if applicable, changes in performance following updates, retraining, or adaptation cycles. Providing a baseline for comparison allows regulators to verify that the system continues to meet the required thresholds throughout its lifecycle.

5. Risk Management and Mitigation

The risk management section is of critical importance. Here the provider records the risks to health, safety and fundamental rights identified during the design process, the steps taken to address them, and any residual risks that remain. This is not a one-off exercise: the Act expects these records to be updated as the system is deployed and monitored in the real world.

The Risk Management File may include:

  • Risk Management Plan.
  • Risk Assessment.
  • Risk Controls.
  • Risk Management Report.

Each identified risk should be matched to a mitigation strategy. These may involve technical solutions, such as fail-safes or anomaly detection; operational measures, such as restricting access or requiring specific operator training; and procedural safeguards, such as independent audits. The file should explain how these strategies reduce risk to acceptable levels and how residual risks have been justified.

For Risk Assessment, we typically employ a Failure Modes and Effects Analysis (FMEA) approach to assess initial risk and final residual risk following risk control. The following standards and frameworks may be taken into consideration:

  • ISO 31000 – Risk management – Guidelines.
  • ISO/IEC 23894 – Information technology – Artificial intelligence – Guidance on risk management.
  • NIST AI Risk Management Framework.
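In an FMEA, each failure mode is scored for severity, occurrence, and detection, and the product of the three gives a Risk Priority Number (RPN) that can be compared before and after risk controls. A minimal sketch, with a hypothetical failure mode and illustrative scores:

```python
def risk_priority_number(severity, occurrence, detection):
    """FMEA Risk Priority Number: each factor scored 1 (best) to 10 (worst)."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be between 1 and 10")
    return severity * occurrence * detection

# Hypothetical failure mode: misclassification affecting a minority subgroup.
initial_rpn = risk_priority_number(severity=8, occurrence=5, detection=6)   # initial risk
# After a risk control (a subgroup-monitoring alarm improves detection).
residual_rpn = risk_priority_number(severity=8, occurrence=5, detection=2)  # residual risk
```

Recording both the initial and residual RPN for each failure mode gives the Risk Management Report a clear, auditable account of how much each control reduced the risk.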

6. Lifecycle Changes

This section records all modifications to the HRAI system after its initial development and deployment, ensuring that regulators can track how the system evolves over its operational life. The aim is to demonstrate that the system remains compliant with the Act despite updates, enhancements, or other changes.

Lifecycle change management should begin with a change control policy that defines what constitutes a significant change, who is authorised to approve it, and how changes are documented. Significant changes might include updates to algorithms, modifications to training data, hardware replacements, adjustments to decision thresholds, or the introduction of new system functionalities. For adaptive or continuously learning systems, the scope must also cover planned retraining cycles or automatic parameter adjustments.

For each change, the file should record its nature, rationale, and impact. This includes the objectives of the change, the parts of the system affected, and the expected outcomes. The documentation should also include any new risk assessments, performance re-validations, and mitigation steps needed to maintain compliance. Where changes alter the system’s intended purpose, target users, or Annex III classification, this must be explicitly noted and justified.

The section should also cover traceability and version control. Every change should have a unique identifier, a date, a description, the person or team responsible, and links to associated technical documents such as updated design specifications, retraining datasets, or revised user instructions. Where possible, an audit trail should be kept to demonstrate the exact state of the system at any point in its lifecycle.
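A change record with the elements just listed can be captured in a simple, auditable structure. The field names and example entry below are our illustrative assumptions, sketched to show one way of enforcing unique identifiers and linked documents:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRecord:
    """Traceable lifecycle-change entry with a unique identifier."""
    change_id: str
    change_date: date
    description: str
    responsible: str
    affected_components: list
    linked_documents: list = field(default_factory=list)
    requires_revalidation: bool = False

audit_trail = []

def log_change(record, trail):
    """Append a change record, enforcing unique identifiers."""
    if any(r.change_id == record.change_id for r in trail):
        raise ValueError(f"Duplicate change id: {record.change_id}")
    trail.append(record)

# Hypothetical retraining change linked to its re-validation report.
log_change(ChangeRecord(
    change_id="CHG-2025-014",
    change_date=date(2025, 9, 3),
    description="Retrained scoring model on Q2 data",
    responsible="ML Platform Team",
    affected_components=["scoring-model"],
    linked_documents=["VAL-REP-031"],
    requires_revalidation=True,
), audit_trail)
```

Because every entry carries an identifier, a date, a responsible party, and links to associated documents, the trail can reconstruct the exact state of the system at any point in its lifecycle.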

7. Applicable Standards

This section lists all the harmonised standards applied in the design, development, testing, and maintenance of the HRAI system, as recognised under the EU AI Act. Where harmonised standards are not available or not fully applied, the section should reference other relevant EU, international, or sector-specific technical specifications that have been used to demonstrate compliance. The purpose is to give regulators a transparent map between the system’s compliance obligations and the external frameworks used to meet them.

The list of applicable standards will typically:

  • Identify the full title and reference (including year/version).
  • Describe how the standard was applied to the system or its components.
  • Note any deviations or partial applications, along with justification.

The section should also explain whether the standard was applied in full, partially, or replaced by alternative compliance measures.

At present, no standards have been harmonised in relation to the EU AI Act, although we can expect some in the future. The use of harmonised standards allows regulators to presume conformity with certain requirements. For example, if ISO/IEC 23894 were to be harmonised, applying it and claiming conformity with the standard would allow regulators to presume conformity with the AI Act’s risk-management requirements.

8. Declaration of Conformity

The Declaration of Conformity is the formal statement in which the provider affirms that the HRAI system meets all applicable requirements set out in the EU AI Act. It serves as the legal confirmation that the technical documentation is complete, accurate, and demonstrates compliance with the Act.

The HRAI System Declaration of Conformity (DoC) is required by Article 47 and its content is laid out in Annex V of the Act.

9. Post-Market Monitoring

This section describes the framework for monitoring the HRAI system after it has been placed on the market or put into service. Its purpose is to ensure that the system continues to meet the requirements of the EU AI Act throughout its operational life, including detecting new risks, performance degradation, or emerging compliance issues.

The Post-Market Monitoring (PMM) plan should begin with a clear statement of objectives, such as verifying ongoing compliance, assessing performance stability, identifying unintended behaviours, and updating risk assessments in light of real-world conditions. It must also set out data collection methods for operational monitoring, including both automated logs and feedback from deployers or affected individuals. Where feasible, this should include monitoring for subgroup performance differences, bias, or discriminatory impacts that were not detected in pre-market testing.

The plan must outline frequency and triggers for reviews. Routine checks could be carried out on a scheduled basis (e.g. quarterly or annually), while unscheduled reviews should be triggered by specific events, such as system failures, regulatory notifications, significant updates, or incidents affecting health, safety, or fundamental rights. Each review should result in an updated compliance report, including any corrective actions taken.

The section should also detail the incident response process. This includes procedures for logging, assessing, and resolving incidents, escalation protocols, and reporting obligations to competent authorities as required under the Act.

Where relevant, the PMM framework should integrate with the provider’s Quality Management System (QMS) to ensure systematic handling, traceability, and accountability.

Related Obligations

While technical documentation is a core compliance requirement for HRAI systems under the EU AI Act, it does not stand alone. Several related obligations directly influence how the documentation is created, updated, and used. Understanding these connections is essential for building a compliance strategy that is both effective and sustainable.

Automatic Logging / Record-Keeping

Article 12 of the EU AI Act requires HRAI systems to automatically record events throughout their lifecycle to ensure traceability. These logs provide a factual record of how the system has operated over time, supporting both internal audits and regulatory oversight. Logs may include system inputs and outputs, operator actions, system state changes, and error or failure events. Properly structured logs help verify compliance, diagnose incidents, and feed into post-market monitoring activities.
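The event types mentioned above lend themselves to structured, timestamped, append-only logging. The following is a minimal sketch, with hypothetical event names and payloads (Article 12 prescribes traceability, not a particular log format):

```python
import json
import time

def log_event(stream, event_type, payload):
    """Append a structured, timestamped event to an append-only log stream."""
    stream.append(json.dumps({
        "timestamp": time.time(),
        "event_type": event_type,  # e.g. "inference", "operator_action", "error"
        "payload": payload,
    }))

events = []
log_event(events, "inference",
          {"input_id": "req-123", "output": "approved", "confidence": 0.92})
log_event(events, "operator_action",
          {"user": "analyst-7", "action": "manual_override"})

# Logs can later be filtered for audits or post-market monitoring.
overrides = [json.loads(e) for e in events
             if json.loads(e)["event_type"] == "operator_action"]
```

Serialising each event at write time keeps the record factual and replayable, which is what makes logs usable for incident diagnosis and post-market monitoring alike.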

Quality Management Systems (QMS)

Article 17 mandates that providers operate a QMS that integrates compliance, risk management, documentation control, and adherence to applicable technical standards. A robust QMS ensures the processes for updating technical documentation, managing lifecycle changes, and maintaining performance are systematic and auditable.

Best-Practice

Well-structured, clearly written documentation not only improves regulatory review but also builds trust with stakeholders. Taking into account our experience with EU regulators in other sectors such as the medical device industry, we’ve come up with some best-practice measures to adopt when putting together your technical documentation.

Readability

Break the content into short, clearly labelled sections that mirror the structure of Annex IV. Use concise headings to help readers navigate, and bullet points or numbered lists for dense information, such as system specifications or test protocols. Where complex concepts are unavoidable, include plain-language explanations alongside technical detail to make the documentation accessible to both regulators and non-technical readers.

Tone

Maintain a professional tone that communicates confidence and competence. Use active voice and moderate-length sentences to keep the text direct and engaging. Avoid unnecessary jargon; when technical terms are essential, define them on first use. Regulators will expect precision, while broader audiences will appreciate clarity; striking this balance is essential.

Consistency

Ensure all sections follow a consistent structure and formatting style, including how headings are capitalised, how documents are referenced, and how dates, version numbers, and units are presented. A consistent style not only improves the user experience but also supports internal version control and cross-referencing.

Searchability

Make the documentation easy to find both internally and externally. For internal teams, maintain a digital index with hyperlinks to each section of the technical file, ensuring version-controlled documents are always accessible. For public or client-facing materials, use clear, descriptive file names, structured URLs, and embedded metadata so search engines can index the content effectively. Where possible, provide a search bar or filter function in online portals so users can quickly locate relevant sections or keywords.

Conclusion

Technical documentation for HRAI systems under the EU AI Act is not simply an administrative exercise. When developed in line with Annex IV, it becomes the backbone of compliance, ensuring transparency, accountability, and regulatory readiness. By linking it with related obligations such as automatic logging, risk management, QMS integration, and conformity assessment, providers create a living framework that supports safe and lawful operation throughout the system’s lifecycle.

For both established companies and SMEs, thorough documentation is also a strategic asset. It strengthens governance, improves oversight, and builds trust with regulators, clients, and end users alike. Rather than treating Annex IV as a checklist, providers should view it as a roadmap for embedding quality and responsibility into AI development from the start, enabling innovation that is both compliant and credible.

How Blue Arrow can Assist

At Blue Arrow, we make EU AI Act Annex IV compliance for High-Risk AI systems straightforward. Combining deep regulatory expertise with practical, results-driven methods, we map every requirement to your system’s design, development, and operations. From creating regulator-ready technical files to setting up post-market monitoring and lifecycle change controls, we handle the heavy lifting and integrate seamlessly with your existing workflows, ensuring compliance is both achieved and sustained.

Build your EU AIA-compliant technical documentation today!

Drawing on experience in highly regulated sectors like medical devices, we apply proven methodologies to anticipate regulatory expectations and withstand the highest scrutiny. Whether you need a one-off documentation project or end-to-end compliance support, our tailored approach reduces risk, saves time, and builds confidence with regulators, clients, and stakeholders. With us, your Annex IV documentation becomes a strategic asset for governance, transparency, and trust, starting with a free consultation and a detailed AI Compliance Strategy specific to your product.