Aguardic

EU AI Act 2026: What AI Vendors Need to Know Before August

Full enforcement for high-risk AI systems begins August 2, 2026. Here's what's already in effect, what's coming, who it applies to, and what you should be doing right now to prepare.

Aguardic Team · February 24, 2026 · 9 min read

The EU AI Act is the most consequential AI regulation in the world, and its most impactful phase is six months away. Full enforcement for high-risk AI systems begins August 2, 2026. If you're building AI products that serve EU customers — or that could be deployed by EU customers even if you're based elsewhere — this deadline applies to you.

The fines are not theoretical: up to €35 million or 7% of global annual turnover, whichever is higher. Because the cap is the higher of the two figures, a company doing $50 million in revenue faces exposure up to €35 million — more than half its revenue — while for a billion-dollar company, 7% works out to $70 million.

This guide covers what's already in effect, what's coming in August, who it applies to, and what you should be doing right now to prepare.

What's Already in Effect

The EU AI Act didn't start in August 2026. Key provisions have been rolling in since early 2025.

Prohibited AI practices (effective February 2, 2025). Certain AI uses are banned outright across the EU: social scoring systems by governments, real-time biometric identification in public spaces (with narrow exceptions), AI that manipulates people through subliminal or deceptive techniques, and systems that exploit vulnerabilities based on age, disability, or social or economic situation. If your product touches any of these areas, you should already be in compliance.

General-purpose AI model obligations (effective August 2, 2025). Providers of general-purpose AI models — the foundation models that other products are built on — face transparency requirements including technical documentation, copyright compliance information, and a summary of training data content. This primarily affects model providers (OpenAI, Anthropic, Google, Meta) rather than companies building applications on top of their models. However, if you're fine-tuning or significantly modifying a GPAI model, you may inherit some provider obligations.

AI literacy requirements (effective February 2, 2025). Organizations deploying AI systems must ensure their staff have sufficient AI literacy to understand the systems they're using. This is broadly applicable and often overlooked.

What's Coming August 2, 2026

The August deadline is when the regulation's most operationally demanding requirements take effect — the obligations for high-risk AI systems.

High-Risk AI Classification

An AI system is classified as high-risk if it falls into specific categories defined in Annex III of the regulation. The categories most likely to affect AI vendors include:

Biometric and identity systems. Remote biometric identification, emotion recognition in workplaces or education, and biometric categorization based on sensitive attributes.

Critical infrastructure management. AI used in managing road traffic safety, water, gas, heating, or electricity supply.

Education and vocational training. AI that determines access to education, evaluates learning outcomes, or monitors students (including proctoring and cheating detection).

Employment and worker management. AI used in recruitment, job application filtering, performance evaluation, promotion decisions, task allocation, or monitoring worker behavior.

Essential services access. AI used to evaluate creditworthiness, set insurance premiums, evaluate emergency service requests, or assess eligibility for public assistance.

Law enforcement and border control. AI used in crime analytics, polygraph-adjacent systems, evidence reliability assessment, profiling for crime prediction, or migration and asylum processing.

Justice and democratic processes. AI used by judicial authorities to research and interpret facts, and systems intended to influence voting behavior.

If your AI product assists with any of these functions — even as a component or module that an enterprise customer integrates into a high-risk system — you may be in scope.

Obligations for High-Risk AI Systems

If your system qualifies as high-risk, here's what you're required to have in place by August:

Risk management system. A documented, ongoing process for identifying, analyzing, evaluating, and mitigating risks throughout the AI system's lifecycle. This isn't a one-time assessment — it's continuous risk management with documented updates.

Data governance. Documented practices for training, validation, and testing data — including data quality criteria, bias examination, and gap identification. If your model was trained on data with known limitations, those limitations must be documented.

Technical documentation. Comprehensive documentation that demonstrates compliance before the system is placed on the market. This includes the system's intended purpose, design specifications, risk management procedures, and the results of conformity assessments.

Record-keeping and logging. Automatic logging of events throughout the system's lifecycle, with logs retained for an appropriate period. The logs must enable monitoring of the system's operation and facilitate post-market monitoring. For AI governance purposes, this means evaluation records, violation logs, and resolution histories — kept for audit purposes.

Transparency and user instructions. Clear instructions for downstream deployers that include the system's intended purpose, level of accuracy, known limitations, and the human oversight measures needed to use it safely.

Human oversight. Systems must be designed to allow effective oversight by humans during use. This includes the ability to fully understand the system's capabilities and limitations, correctly interpret its outputs, decide not to use it or override its output, and intervene or stop the system.

Accuracy, robustness, and cybersecurity. Appropriate levels of accuracy, robustness, and cybersecurity, documented and maintained throughout the system's lifecycle.

Conformity assessment. Before placing a high-risk system on the EU market, you must conduct a conformity assessment demonstrating compliance with all applicable requirements. For most Annex III categories, this is a self-assessment by the provider; for biometric systems, a third-party assessment by a notified body can be required where harmonized standards are not fully applied.

The European Commission Digital Omnibus Proposal

It's worth noting that the European Commission proposed the Digital Omnibus package in late 2025, which among other things would potentially adjust some EU AI Act timelines and implementation details. As of early 2026, this proposal is still working through the legislative process and has not modified the August 2026 enforcement date for high-risk systems. Monitor this closely — but don't use the possibility of delays as a reason to postpone preparation.

Who This Actually Affects

The EU AI Act's scope extends beyond companies headquartered in the EU.

Providers. If you develop an AI system or have one developed for you, and place it on the EU market or put it into service in the EU — regardless of where you're established — you're a provider subject to the full set of obligations.

Deployers. If you use an AI system under your authority in the EU, you're a deployer with your own set of obligations (even if the provider is outside the EU).

Importers and distributors. If you bring AI systems into the EU market or make them available, you have verification and compliance obligations.

The key implication for US-based AI vendors: If your product is used by EU customers, or if your enterprise customers deploy your product for EU end-users, you are likely in scope. "We're a US company" is not a defense.

What You Should Be Doing Now

Conduct an AI System Inventory

Map every AI system your organization provides or deploys. For each system, document its intended purpose, the categories of decisions it influences, the data it processes, and the geographic scope of its deployment. Cross-reference against the Annex III high-risk categories to determine which systems are in scope.

This sounds basic, but most companies don't have a comprehensive AI system inventory. You can't assess compliance for systems you haven't cataloged.
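An inventory entry per system, with an Annex III cross-reference, can be as simple as a structured record. A minimal sketch — the field names, category labels, and `potentially_high_risk` helper are all illustrative assumptions, not terminology from the regulation:

```python
from dataclasses import dataclass, field

# Hypothetical shorthand labels for the Annex III categories discussed above;
# not an official enumeration from the regulation.
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class AISystemRecord:
    """One row of an AI system inventory."""
    name: str
    intended_purpose: str
    decisions_influenced: list[str]
    data_categories: list[str]
    deployed_in_eu: bool  # includes deployment by EU customers or for EU end-users
    annex_iii_categories: set[str] = field(default_factory=set)

    def potentially_high_risk(self) -> bool:
        # In scope if any Annex III category applies and the system reaches the EU.
        return self.deployed_in_eu and bool(
            self.annex_iii_categories & ANNEX_III_CATEGORIES
        )

# Example: a recruitment tool lands in the "employment" category.
resume_screener = AISystemRecord(
    name="resume-screener",
    intended_purpose="Rank inbound job applications",
    decisions_influenced=["shortlisting"],
    data_categories=["CVs", "application forms"],
    deployed_in_eu=True,
    annex_iii_categories={"employment"},
)
```

Whether you keep this in a spreadsheet, a database, or code, the point is the same: every system gets the same fields, so the Annex III screening is mechanical rather than ad hoc.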

Perform a Gap Assessment

For each high-risk system, evaluate your current posture against the August requirements: risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness, and conformity assessment. Identify specific gaps that need to be closed before August.

The most common gaps for AI vendors are: insufficient logging and record-keeping (systems that don't retain evaluation or decision records), incomplete technical documentation (no formal description of the system's design, purpose, and limitations), and absence of continuous risk management (one-time assessments rather than ongoing processes).
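A gap assessment is ultimately a set difference: the August requirements a system must satisfy, minus the controls it already has. A minimal sketch with hypothetical control names:

```python
# The eight obligation areas for high-risk systems, as shorthand labels (illustrative).
REQUIRED_CONTROLS = {
    "risk_management", "data_governance", "technical_documentation",
    "logging", "transparency", "human_oversight",
    "accuracy_robustness", "conformity_assessment",
}

def gap_assessment(implemented: set[str]) -> set[str]:
    """Return the August requirements this system does not yet satisfy."""
    return REQUIRED_CONTROLS - implemented

# Example: a system with only ad hoc logging and some documentation.
gaps = gap_assessment({"logging", "technical_documentation"})
```

Running this per system in your inventory turns "are we ready?" into a concrete, prioritizable backlog.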

Build Your Logging and Monitoring Infrastructure

Of all the requirements, logging and record-keeping is the most operationally demanding and the hardest to retrofit. The regulation requires automatic logging of events that enables monitoring of system operation. Bolting this onto an existing system after the fact is significantly harder than building it in.

At minimum, you need to log every AI system output or decision, what inputs were provided, what policies or rules were applied, and what the outcome was. These logs need to be retained, searchable, and exportable for regulatory review. If you have a governance platform generating evaluation records and violation logs, you're already building the evidence base the EU AI Act requires.
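The minimum record described above — output, inputs, policies applied, outcome — maps naturally onto an append-only structured log. A sketch of one such record, assuming a JSON Lines sink; the schema and function name are illustrative, not a prescribed format:

```python
import json
import time
import uuid

def log_ai_decision(system_id, inputs, policies_applied, output, outcome, sink):
    """Append one structured, exportable record per AI decision (illustrative schema)."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),           # in production, use a synchronized clock
        "system_id": system_id,
        "inputs": inputs,                   # or a hash/reference if inputs are sensitive
        "policies_applied": policies_applied,
        "output": output,
        "outcome": outcome,                 # e.g. "allowed", "blocked", "escalated"
    }
    # Append-only JSON Lines keeps records searchable and exportable for review.
    sink.write(json.dumps(record) + "\n")
    return record
```

The design choice that matters is writing one self-contained record per decision at the moment it happens: retention, search, and export for regulators are then storage problems, not instrumentation problems.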

Prepare Your Technical Documentation

The regulation requires technical documentation that includes general description, detailed description of system elements, development process documentation, monitoring and testing procedures, and applicable standards. Start drafting this now — it's not something you write in a weekend.

For AI vendors, the technical documentation should cover your model selection rationale, training and evaluation data descriptions, accuracy metrics and known limitations, the governance policies enforced on system outputs, and the human oversight mechanisms available to deployers.

Implement Human Oversight Mechanisms

High-risk AI systems must be designed so that humans can effectively oversee them during use. This means deployers need the ability to understand the system's outputs, override or stop the system, and intervene in individual decisions.

For product design, this means building in human review workflows, override capabilities, and clear output explanations. For governance, it means having a review queue for edge cases and a process for human judgment on outputs the system flags as uncertain.
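The routing logic behind such a review queue can be very small. A minimal sketch, assuming the system produces a confidence score and that a hypothetical threshold separates auto-approval from human review:

```python
def route_output(output: str, confidence: float, threshold: float = 0.8) -> dict:
    """Auto-approve confident outputs; queue the rest for human review (illustrative)."""
    if confidence >= threshold:
        return {"action": "auto_approve", "output": output}
    # Below-threshold outputs go to a human reviewer, with the reason recorded
    # so the decision trail satisfies the logging obligations discussed above.
    return {
        "action": "human_review",
        "output": output,
        "reason": f"confidence {confidence:.2f} below threshold {threshold}",
    }
```

The threshold itself then becomes a governance parameter: lowering it routes more decisions to humans, which is exactly the kind of documented, adjustable oversight control deployers need.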

Consider ISO 42001 Alignment

ISO 42001 is the international standard for AI management systems. While not required by the EU AI Act, it provides a structured framework for meeting many of the Act's requirements — particularly risk management, documentation, and continuous improvement. Organizations that align with ISO 42001 will find the EU AI Act conformity assessment significantly easier.

The standard is still gaining adoption, which means early alignment is a competitive differentiator. Being able to tell an EU enterprise customer "our AI management system is aligned with ISO 42001" provides credibility that a generic compliance claim doesn't.

The Competitive Angle

For AI vendors selling into EU markets — or selling to companies that serve EU markets — EU AI Act compliance is becoming a sales requirement, not just a regulatory obligation. EU enterprise procurement teams are already incorporating AI Act requirements into vendor assessments.

The companies that can demonstrate compliance with structured evidence — risk assessments, logging infrastructure, governance policies, technical documentation — will close EU deals that competitors can't. The companies that scramble after August will face both regulatory risk and competitive disadvantage.

Six months is enough time to prepare if you start now. It's not enough time if you start in June.
