
Introduction
The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) is the world’s first comprehensive law regulating the development, deployment and use of artificial intelligence. Entering into force on 1 August 2024, it introduces a tiered, risk-based framework that places the heaviest obligations on High-Risk AI (HRAI) systems.
For the insurance sector, this is more than a compliance exercise. Insurers increasingly rely on AI to assess risk, calculate premiums, detect fraud and personalise products. Many of these systems may fall into the high-risk category defined by the Act, triggering stringent requirements for both the providers who create these systems and the deployers who use them in day-to-day operations.
The Act’s impact will be felt across underwriting, claims management and customer engagement. Insurers will need to review their AI models, strengthen governance and ensure their teams understand new legal responsibilities. This shift is not just about avoiding penalties; it is about building trust with customers and regulators in an environment where AI’s influence on people’s lives is under increasing scrutiny.
What Counts as High-Risk in Insurance?
Under the EU AI Act, an HRAI system is one that poses a significant risk to people’s health, safety or fundamental rights. The regulation sets out detailed categories in Annex III, and insurance appears explicitly within this list (see Annex III, points 5(b) and 5(c)).
For insurers, the most relevant category covers AI systems intended to evaluate the creditworthiness or establish the credit score of natural persons, as well as systems used to assess and price risk in relation to life and health insurance. This includes models that calculate premiums, determine eligibility, or automate significant decisions about coverage.
Being classified as high-risk triggers a set of strict compliance requirements. The classification is not optional; however, under Article 6(3), a system that technically fits an Annex III description may fall outside the high-risk category where it does not pose a significant risk of harm, for example because its output does not materially influence individual decisions. Article 6(4) requires providers to document and justify that assessment, and the justification must be carefully prepared, as it may be scrutinised by regulators.
In practice, many AI tools used for underwriting, claims scoring and customer segmentation will be high-risk. Insurers therefore need to identify which systems are in scope and prioritise them for compliance planning.
Extra-Territorial Reach
The EU AI Act is not limited to organisations physically based within the European Union. Its scope is deliberately broad, applying to:
- Providers that place AI systems on the EU market, regardless of where they are established.
- Deployers who use AI systems within the EU.
- Providers and deployers outside the EU whose systems’ output is used in the EU.
For the insurance sector, this means that a company developing underwriting or claims automation software in another jurisdiction, for example, the United States or the UK, will still be subject to the Act if it sells to EU insurers or if its AI outputs are used in EU-based operations. The same applies to multinational insurance groups operating across both EU and non-EU markets.
Where a provider is not established in the EU, the Act requires them to appoint an authorised representative located in the EU. This representative acts as the primary point of contact for EU regulators, holding copies of technical documentation and ensuring cooperation during investigations or audits.
Obligations for Providers of High-Risk AI Systems
In the language of the EU AI Act, a provider is any organisation that develops a high-risk AI system and places it on the market under its own name or trademark. In insurance, this could be a specialist technology vendor, an insurtech start-up, or even an insurer that builds a proprietary model for use by other firms.
The Act sets out a demanding set of expectations for providers in Article 16. Before an HRAI system can be made available in the EU, it must go through a conformity assessment to confirm it meets all applicable requirements. This is not a single moment in time but part of a broader risk management process that runs throughout the system’s lifecycle. Providers are expected to identify and mitigate risks early in development, test for accuracy and robustness, and design systems with human oversight in mind.
Transparency is a recurring theme. Providers must supply clear instructions for use, explaining what the system does, its intended purpose, and its limitations. These instructions will form the foundation for how deployers integrate the AI into their own processes. The law also requires the creation of detailed technical documentation that regulators can review if questions arise.
Once a system is live, the provider’s responsibilities do not end. They must monitor its performance, record relevant operational data, and report serious incidents to competent authorities. All high-risk AI systems must be registered in a public EU database, ensuring that regulators and the public know what is in use and by whom.
For organisations in the insurance technology space, meeting these requirements will mean investing in governance, quality assurance and legal expertise. Those who do it well can use compliance as a differentiator, positioning themselves as trusted suppliers in a market where regulatory scrutiny is only set to increase.
Technical documentation
All HRAI systems must be accompanied by detailed, structured documentation, as required by Article 11, explaining how the system works and demonstrating compliance with the Act’s requirements. This goes far beyond a user manual: it includes the intended purpose, design specifications, data sources, training processes, testing results, and known limitations.
In insurance, this level of transparency can be challenging where proprietary models or sensitive data are involved, but it is essential for enabling regulators (and deployers) to understand the system’s capabilities and risks.
Take a look at our AI Regulatory Services.
Quality management system (QMS)
Providers must operate a documented quality management system that covers all processes involved in developing and maintaining the AI system, as laid out in Article 17. A QMS under the AI Act is not a purely technical standard; it must integrate legal compliance, risk management, and post-market monitoring. For insurance-related AI, this means embedding quality checks into data preparation, model validation, change management, and version control, ensuring that every update is consistent with regulatory requirements.
Find out how Blue Arrow can assist with our various AI Governance services.
Automatically generated logs
High-risk AI systems must be capable of generating and storing logs automatically during operation. These logs should capture events relevant to compliance, performance and safety, making it possible to reconstruct how a system reached a particular decision. In insurance applications such as claims automation or premium calculation, logs are invaluable for investigating disputes, identifying errors, and detecting patterns of bias. The Act requires that these logs be kept for a period proportionate to the AI’s intended purpose, which means providers must design storage and retrieval processes from the outset.
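To make this concrete, below is a minimal sketch in Python of what automatic decision logging might look like for a premium-calculation model. The field names, file format and hashing approach are illustrative assumptions, not requirements taken from the Act.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_file: str, model_version: str, inputs: dict, output: dict) -> None:
    """Append one structured record per automated decision, so it can be reconstructed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model version produced the decision
        # Fingerprint of the inputs: lets auditors match a decision to its data
        # without storing raw personal data in the log itself.
        "input_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,  # e.g. premium, risk score, or claims triage flag
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage for a premium-calculation system
log_decision(
    "decisions.jsonl",
    model_version="premium-model-2.3.1",
    inputs={"age_band": "35-44", "region": "urban", "cover": "life"},
    output={"premium_eur": 41.20, "decision": "offer"},
)
```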
Conformity assessment
Before an HRAI system can be placed on the market, it must undergo a conformity assessment to verify that it meets all applicable legal requirements. For many systems, this will be a self-assessment following the procedures set out in Annex VI of the Act, but certain systems may require the involvement of a notified body. In the insurance sector, the conformity assessment will need to account for sector-specific risks such as discrimination in underwriting or unfair claims handling, and it must be repeated if the system undergoes substantial modifications.
Obligations for Deployers (Insurers)
In the EU AI Act, deployers are organisations that use an AI system under their authority in the course of their professional activities. For insurance companies, this means any high-risk AI used for underwriting, claims handling, fraud detection, or customer profiling will trigger specific legal responsibilities.
Deployer obligations are laid out in Article 26 of the Act; for insurers, they can be interpreted as follows.
Using the system in accordance with instructions
For insurers, this means configuring the AI model exactly as recommended by the provider, applying updates, and not altering risk parameters beyond documented tolerances.
In practice, this could limit ad hoc adjustments to underwriting rules, premium calculation factors, or claims triage processes unless such changes are formally approved. Deviating from instructions, for instance, by feeding untested risk indicators into the model, could expose the insurer to compliance breaches and liability if the output is found to be biased or inaccurate.
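As an illustration of what “documented tolerances” could mean operationally, here is a short Python sketch of a pre-deployment check that validates an insurer’s configuration against ranges published in the provider’s instructions for use. The parameter names and ranges are invented for this example.

```python
# Illustrative tolerances taken from (hypothetical) provider instructions for use
DOCUMENTED_TOLERANCES = {
    "max_premium_loading": (0.0, 0.25),   # loading factor must stay within 0-25%
    "fraud_score_threshold": (0.5, 0.9),  # triage threshold range approved by provider
}

def validate_config(config: dict) -> list[str]:
    """Return violations where deployer settings leave the documented bounds."""
    violations = []
    for name, value in config.items():
        if name not in DOCUMENTED_TOLERANCES:
            violations.append(f"{name}: parameter not covered by provider instructions")
            continue
        low, high = DOCUMENTED_TOLERANCES[name]
        if not low <= value <= high:
            violations.append(f"{name}={value} is outside the documented range [{low}, {high}]")
    return violations

issues = validate_config({"max_premium_loading": 0.30, "fraud_score_threshold": 0.7})
if issues:
    # An out-of-range setting should trigger formal approval, not a silent override
    raise ValueError("Configuration deviates from provider instructions: " + "; ".join(issues))
```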
Assigning human oversight
In the insurance sector, human oversight will require trained underwriters, claims handlers, or compliance officers to monitor AI-driven recommendations and decisions. These individuals will need both insurance domain expertise and enough AI literacy to detect anomalies, question outputs, and override automated results when necessary.
For example, a claims handler might spot when the AI disproportionately rejects claims from a particular demographic group and intervene before the decision is finalised. The obligation could also trigger investment in training programmes that bridge the gap between insurance regulation and AI system operation.
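The sketch below illustrates the kind of disparity check an oversight team might run over recent claim decisions. The groups, sample data and the 80% threshold (a rule of thumb borrowed from the well-known “four-fifths” test) are assumptions for illustration; the Act does not prescribe a specific metric.

```python
def rejection_disparity(decisions: list[dict], group_key: str = "group"):
    """Compare claim rejection rates across groups and flag large gaps."""
    totals: dict = {}
    rejected: dict = {}
    for d in decisions:
        g = d[group_key]
        totals[g] = totals.get(g, 0) + 1
        rejected[g] = rejected.get(g, 0) + (d["outcome"] == "rejected")
    rates = {g: rejected[g] / totals[g] for g in totals}
    # Flag groups whose acceptance rate falls below 80% of the best-treated group's
    acceptance = {g: 1 - r for g, r in rates.items()}
    best = max(acceptance.values())
    flagged = [g for g, a in acceptance.items() if a < 0.8 * best]
    return rates, flagged

sample = [
    {"group": "A", "outcome": "accepted"}, {"group": "A", "outcome": "rejected"},
    {"group": "B", "outcome": "rejected"}, {"group": "B", "outcome": "rejected"},
]
rates, flagged = rejection_disparity(sample)
print(rates, flagged)  # {'A': 0.5, 'B': 1.0} ['B'] -> escalate group B for human review
```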
In addition, human oversight should incorporate a clear reporting process so that any anomalies, suspected biases, or operational concerns identified by staff are promptly escalated to the appropriate compliance or risk teams. These reports should be documented and tracked to ensure that identified issues lead to corrective actions, and, where necessary, are communicated to the AI provider or relevant regulators. This structured approach ensures that oversight is not only reactive but also forms part of a continuous improvement loop for the AI system’s performance and compliance.
Insurance companies will be free to decide how oversight is implemented, for instance, whether to embed AI monitoring within underwriting teams or create a dedicated “AI governance unit” within compliance. However, they cannot use this flexibility to dilute oversight; it must still achieve the level of scrutiny envisaged by the AI Act, even if it means reorganising workflows and responsibilities across departments.
Ensuring relevant and representative input data
Where insurers supply the data used by the AI, such as customer application forms, claims histories, or telematics, they must ensure it is relevant, accurate, and representative of the customer base. Inaccurate or skewed data could result in unfair pricing or discriminatory claim outcomes.
For example, if the AI is trained mostly on urban policyholders, rural customers might receive less accurate risk assessments. This obligation could require insurers to perform regular dataset audits, ensure proper sampling, and document data governance procedures.
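A minimal sketch of such a dataset audit follows: it compares segment shares in the model’s input data with the insurer’s actual portfolio. The urban/rural split and the five-percentage-point tolerance are illustrative assumptions.

```python
def representativeness_audit(data_counts: dict, portfolio_shares: dict, tol: float = 0.05) -> dict:
    """Flag segments whose share in the data drifts from the portfolio baseline."""
    total = sum(data_counts.values())
    findings = {}
    for segment, expected in portfolio_shares.items():
        observed = data_counts.get(segment, 0) / total
        if abs(observed - expected) > tol:
            findings[segment] = {"observed": round(observed, 3), "expected": expected}
    return findings

# Hypothetical figures: the input data skews urban relative to the book of business
print(representativeness_audit(
    data_counts={"urban": 9000, "rural": 1000},
    portfolio_shares={"urban": 0.70, "rural": 0.30},
))  # both segments flagged -> document the finding and rebalance sampling
```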
Monitoring operation and reporting risks or incidents
Insurance companies must actively track the system’s outputs rather than take a “set it and forget it” approach. If, for instance, the AI system begins denying a surge of valid claims due to a software update or misclassification, the insurer must suspend its use and notify the provider, relevant distributors, and authorities without delay.
In an industry where customer trust is critical, such suspensions could carry significant reputational risk. The process for identifying, escalating, and reporting risks or “serious incidents” will need to be embedded into the insurer’s operational risk framework.
Log retention
Logs generated by the AI system must be retained for at least six months, but insurers will likely need to store them longer to cover policy and claims disputes, which can arise years later. These logs are vital for demonstrating why a claim was approved or denied, or why a premium was set at a particular level. Storage policies must also comply with GDPR, meaning personal data within logs must be protected while still remaining accessible for audits or litigation.
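As a sketch of how the six-month floor might combine with longer, business-driven holds, consider the rule below. The seven-year default and the dispute flag are hypothetical business choices, not figures from the Act; GDPR storage-limitation analysis still applies to any personal data in the logs.

```python
from datetime import date, timedelta

MIN_RETENTION = timedelta(days=183)  # the Act's floor: at least six months

def may_delete(log_date: date, today: date, dispute_open: bool,
               policy_retention: timedelta = timedelta(days=365 * 7)) -> bool:
    """A log may be deleted only once every applicable hold has expired."""
    if dispute_open:
        return False  # an open dispute or litigation hold overrides any schedule
    return (today - log_date) >= max(MIN_RETENTION, policy_retention)

print(may_delete(date(2025, 1, 1), date(2026, 6, 1), dispute_open=False))  # False: still held
```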
Informing employees before workplace use
If AI is introduced into underwriting, claims processing, or sales environments, insurers must inform affected staff and worker representatives. For example, if a call centre now uses AI to rank customer leads or guide claim authorisations, employees must be told how it works and what impact it will have on their role. This is particularly important in an industry where job functions could shift significantly with AI deployment.
Registration obligations for public insurers
For state-backed insurance bodies or public-sector reinsurers, the AI system must be registered in the EU database before use. If not registered, the system cannot be deployed. This adds a procurement-stage compliance check, meaning public insurers must validate registration before signing contracts with AI providers.
Using Article 13 information for DPIAs
Insurance AI systems process sensitive personal and financial data, sometimes including health information. Insurers will need to incorporate provider-supplied technical and risk information into their GDPR Data Protection Impact Assessments (DPIAs). This is particularly relevant for systems that automatically evaluate claims or price premiums, where profiling and automated decision-making rules under GDPR are triggered.
Restrictions on biometric identification
Although less common in traditional insurance, some fraud detection systems use facial recognition or biometric matching to verify identity. If such systems are classified as post-remote biometric identification under the AI Act, insurers must follow strict authorisation processes before use. While this is more relevant to law enforcement, its presence in anti-fraud workflows could impose unexpected compliance hurdles and make biometric-based fraud checks more costly and slower to implement.
Informing customers subject to AI decisions
If the AI system is used to decide on policy acceptance, premium levels, or claims approval, customers must be informed that an AI is involved. This ties closely to GDPR’s transparency requirements and may require insurers to explain, in simple terms, how the AI works and what factors influenced the decision. Clear communication will be key to maintaining customer trust and avoiding disputes.
Cooperating with competent authorities
Finally, insurers must be ready to share information, logs, and system output records with supervisory bodies such as data protection authorities or financial regulators. This cooperation could occur during a compliance inspection or after a complaint from a customer. In practice, insurers will need documented protocols for regulator engagement and a clear chain of custody for system records to ensure nothing is lost or altered during disclosure.
Fundamental Rights Impact Assessment
The Fundamental Rights Impact Assessment (FRIA) is a legal requirement under Article 27 of the EU AI Act for certain deployers of high-risk AI systems, including deployers of the creditworthiness and life and health insurance risk-pricing systems listed in Annex III, points 5(b) and 5(c). In insurance, many core applications, such as underwriting, claims automation, and eligibility assessments for life or health policies, will fall into the high-risk category and therefore require a FRIA.
When the FRIA is required
A FRIA must be completed before an HRAI system is put into use for the first time. The obligation applies only where the system meets the Act’s definition of high-risk, in particular the use cases listed in Annex III.
The FRIA must be a written, structured document covering at least the following elements (a minimal record sketch follows the list):
- Description of processes: A clear explanation of the insurer’s business processes in which the AI system will be used, ensuring alignment with the intended purpose declared by the provider.
- Usage period and frequency: Details on how often and for how long the AI system will be used, for example, whether it operates continuously in underwriting workflows or periodically during renewal cycles.
- Categories of persons affected: Identification of the categories or groups of individuals likely to be affected in the specific insurance context, such as applicants for life insurance, existing health policyholders, or specific demographic segments.
- Risks of harm: An analysis of potential harms to those categories, drawing on the provider’s information under Article 13. This could include discriminatory pricing, denial of essential coverage, privacy breaches, or reputational harm.
- Human oversight measures: A description of how human oversight will be implemented in practice, in line with the provider’s instructions for use. This should include the qualifications of overseeing staff and the intervention process.
- Risk mitigation and governance: Details of measures to be taken if identified risks materialise. This must cover internal governance arrangements, escalation procedures, and accessible complaint mechanisms for affected individuals.
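To illustrate how these elements might be captured consistently across assessments, here is a minimal Python sketch of a structured FRIA record. The class and field names are our own shorthand for the Article 27 content, not terminology mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class FRIARecord:
    """Minimal structured record of the Article 27 FRIA content."""
    process_description: str         # business process in which the AI system is used
    usage_period_frequency: str      # how often and for how long the system runs
    affected_categories: list[str]   # groups of individuals likely to be affected
    risks_of_harm: list[str]         # drawing on provider information under Article 13
    human_oversight_measures: str    # who oversees, their qualifications, how they intervene
    mitigation_and_governance: str   # escalation routes and complaint mechanisms

fria = FRIARecord(
    process_description="Automated risk pricing for individual life policies",
    usage_period_frequency="Continuous; every new application and each renewal cycle",
    affected_categories=["life insurance applicants", "renewing policyholders"],
    risks_of_harm=["discriminatory pricing", "denial of essential coverage"],
    human_oversight_measures="Senior underwriter reviews all flagged quotes before issue",
    mitigation_and_governance="Escalation to AI governance unit; customer complaints channel",
)
```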
Insurers should maintain a clear inventory of all AI systems with their classification under the Act, integrate Fundamental Rights Impact Assessment preparation into product development and model deployment processes, and use cross-functional review teams, combining compliance, underwriting, data science, and legal expertise, to ensure completeness.
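As a starting point for such an inventory, a single illustrative entry might look like the sketch below; the fields and labels are our own suggestion rather than a prescribed format.

```python
# Illustrative AI system inventory entry (field names are a suggested convention)
AI_INVENTORY = [
    {
        "system": "ClaimsTriage v4",
        "role": "deployer",                 # provider, deployer, or both
        "act_classification": "high-risk",  # e.g. Annex III, point 5(c)
        "fria_required": True,
        "fria_status": "in preparation",
        "business_owner": "Head of Claims",
        "review_team": ["compliance", "underwriting", "data science", "legal"],
    },
]
```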
Supervision and Coordination with Insurance Regulators
The EU AI Act designates national competent authorities as the primary bodies responsible for monitoring compliance with the regulation. For HRAI systems, market surveillance authorities will:
- Verify that providers and deployers meet their obligations, including conformity assessments, FRIA completion, and post-market monitoring.
- Investigate complaints or incidents related to AI systems.
- Coordinate enforcement actions where breaches occur.
Coordination mechanisms will be important because many insurers operate across borders. The Act includes provisions for:
- Information sharing between supervisory authorities in different Member States.
- Avoiding regulatory duplication by aligning AI compliance checks with existing supervisory processes, such as product oversight reviews and risk management assessments.
- Leveraging sector expertise, so that AI-specific risks are understood in the context of insurance business models.
For insurers, this means AI compliance will not be managed in isolation. Instead, it will be assessed alongside other regulatory obligations, creating an opportunity to streamline reporting and governance, provided AI risk management is integrated into the wider compliance framework.
Practical Impacts on Insurers
The EU AI Act will influence insurers in two distinct roles: as providers (designing and placing AI systems on the market) and as deployers (using AI systems in their operations). The practical implications differ depending on which role an organisation plays (some insurers will be both).
One of the most immediate changes will be the increased compliance effort. High-risk systems must be documented in detail, assessed for conformity before they are deployed, and monitored once in use. These are not box-ticking exercises but ongoing governance tasks that need to be embedded into normal business processes.
The Act also places new emphasis on operational integration. Risk management, human oversight and incident reporting cannot be isolated compliance functions; they must sit within underwriting workflows, claims handling procedures and customer-facing services. This means legal, compliance, data science and business teams will have to work more closely than before.
For insurers relying on external AI providers, contractual arrangements will need to evolve. It will no longer be enough to buy in technology on the basis of performance claims alone. Contracts should cover compliance responsibilities, allow access to technical documentation, and set out clear escalation routes for dealing with incidents or model updates.
Managing the risk of bias will remain a central concern. In life and health insurance, for example, unfair pricing or denial of coverage could not only breach the Act but also damage public trust. Ongoing monitoring, backed by a willingness to adjust or withdraw models, will be essential to maintaining fairness.
These regulatory changes will also demand a shift in skills and culture. Staff will need to understand the capabilities and limits of AI systems, be able to interpret outputs, and feel confident intervening when needed. This is as much about creating a culture of oversight as it is about meeting formal AI literacy requirements.
Finally, the Act is likely to influence technology strategy. Some insurers may decide to limit their use of high-risk AI to reduce compliance exposure, while others will invest in advanced systems and build the governance capacity to support them. In either case, responsible and transparent AI use will become a marker of market credibility.
AI Literacy
The EU AI Act already requires providers and deployers of AI systems to ensure relevant staff have an appropriate level of AI literacy. This obligation, set out in Article 4, has applied since 2 February 2025, well ahead of the high-risk provisions taking effect in 2026. In insurance, it means underwriters, claims handlers, compliance teams and senior managers must understand how their AI systems work: their capabilities, limitations and potential risks. While no formal testing is required, regulators may expect evidence of training. An understanding of the EU AI Act itself will also be key to demonstrating AI literacy.
Addressing AI literacy early will strengthen human oversight, support ethical decision-making and help insurers prepare for the more demanding obligations ahead.
See how we can help upskill your workforce.
Timelines
The EU AI Act formally entered into force on 1 August 2024, twenty days after its publication in the Official Journal. However, most of its obligations only become applicable later, following a structured, phased timeline.
The landmark deadline for high‑risk AI systems (those most relevant to insurance) falls on 2 August 2026. From that date, providers and deployers must fully comply with obligations such as FRIAs, conformity assessments, logging, and post‑market monitoring.
Insurers should therefore treat these dates not as distant checkpoints but as immediate priorities. Planning, gap analysis, and compliance roadmap development need to happen now to ensure readiness by August 2026.
Conclusion
The EU AI Act marks a decisive shift in how artificial intelligence will be governed in Europe, and the insurance sector is directly in its path. By defining clear obligations for both providers and deployers of high-risk AI systems, the regulation forces a move away from informal governance towards structured, documented, and transparent practices.
For providers, this means building compliance into every stage of system design, from quality management and technical documentation to automated logging and conformity assessment. For deployers, it demands careful evaluation of how AI is integrated into underwriting, claims handling, and customer interactions, with the Fundamental Rights Impact Assessment acting as a new gatekeeper for high-risk applications.
While the compliance burden is significant, it also creates an opportunity. Insurers that invest early in AI governance will not only reduce regulatory risk but also strengthen trust among customers, partners and regulators. In a market where transparency, fairness and accountability are becoming competitive differentiators, the ability to demonstrate responsible AI use could be as valuable as the technology itself.
The message is clear: insurers cannot treat the EU AI Act as a one-off legal hurdle. It is a framework for how AI will be developed and used in Europe for years to come. Those who adapt quickly and integrate these requirements into their everyday operations will be better placed to innovate with confidence, and to thrive in a market that is being reshaped by both regulation and technology.
How Blue Arrow Can Assist You
Navigating the EU AI Act is complex, especially for insurers dealing with high-risk AI systems and cross-border operations. At Blue Arrow, we specialise in helping organisations interpret the regulation, assess their exposure, and implement proportionate, effective compliance measures.
Our services include:
- AI compliance gap analysis to identify where current processes fall short of the Act’s requirements.
- High-risk system classification and advice.
- FRIA preparation and review, ensuring assessments meet Article 27 requirements and align with insurance-specific risks.
- Governance and documentation frameworks for providers and deployers, including ISO 42001 compliance.
- Technical documentation preparation.
- Quality Management System implementation.
- Training and AI literacy programmes tailored to underwriting, claims handling, compliance, and leadership teams.
- Vendor and third-party risk management to ensure suppliers meet their own obligations under the Act.
- EU Authorised Representative service.
Whether you are developing AI in-house, sourcing from external providers, or operating in multiple jurisdictions, Blue Arrow can help you build compliance into your business model without slowing innovation. Our approach is practical, sector-specific, and designed to give you confidence ahead of each key enforcement date.