
90 Days to Compliance: Are You Ready for the Next Phase of the EU AI Act?

Contributed by John Waller – Cloud Security Practice Lead, AST at UltraViolet Cyber

The EU AI Act is not a future obligation; it is a present one, and the next hard deadline is ninety days away. By August 2, 2026, all high-risk AI systems operating within EU jurisdiction must demonstrate compliance. Not document it, not plan for it – demonstrate it. Any organization with employees, business operations, or software sold in the EU falls within scope.

Most organizations have begun some form of assessment, yet as the deadline closes in, what most have assessed is the visible surface: the externally facing systems, the customer-service bots, the procurement tools that came with a vendor's compliance checkbox. What they have not assessed, fully, are the systems they built or configured themselves, deployed for internal use, and quietly embedded into workflows that shape decisions about people. That gap is where August 2 will find them.


High-Risk AI Is Not Just What You Ship – It Is What You Run

The EU AI Act's high-risk classification is not limited to products sold to consumers. It reaches internal operational systems when those systems make or materially influence decisions that affect people's livelihoods, financial access, or working conditions. Hiring and promotion algorithms, employee performance monitoring tools, credit scoring engines, automated loan approval workflows, and AI-driven customer decision systems all carry high-risk classification under the Act – regardless of whether they originated from a vendor or were built in-house.

Unfortunately, many organizations do not yet have complete visibility into how many of these systems they are running or what decisions they are influencing. This is not a technology gap; it is a governance gap. The systems and data flows exist, but what often does not exist is the structured accountability – the documented risk assessments, the human oversight controls, the audit trails – that the Act requires as evidence of compliance. And not as artifacts of a one-time exercise, but as the output of an ongoing operational discipline.

The consequences of that gap are material. Non-compliance can result in fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. Beyond August 2026, a second deadline arrives on August 2, 2027, when AI embedded in regulated products – medical devices, healthcare diagnostics, financial services infrastructure, and critical systems – must also comply. Organizations that treat the August 2026 deadline as a finish line, rather than the first checkpoint in a continuous compliance posture, will find themselves rebuilding under pressure a year from now.

What the Next 90 Days Must Accomplish

Ninety days is enough time to close the gap if the work is sequenced correctly: inventory before risk assessment, risk assessment before control design, control design before evidence generation.

Audit for Complete AI Visibility

The starting point is not risk assessment; it is inventory. Every AI tool influencing decisions about people – internal or external, vendor-built or proprietary, sanctioned or shadow – must be identified and catalogued. Organizations frequently believe they have this visibility; they frequently do not. Unsanctioned AI adoption by employees, AI capabilities bundled into SaaS platforms, and legacy automated decision tools that predate the current governance conversation all create blind spots. Compliance cannot be demonstrated for systems that are not known to exist.
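
To make the inventory actionable, it helps to capture each system as a structured record rather than a row of free text. The sketch below (in Python) is illustrative only: the AISystemRecord dataclass, its fields, and the HIGH_RISK_DOMAINS screen are names assumed for this example, and the is_high_risk check is a first-pass filter, not a substitute for legal classification against the Act's Annex III categories.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative decision domains that tend to trigger high-risk
# classification when a system shapes outcomes for people.
HIGH_RISK_DOMAINS = {
    "hiring", "promotion", "performance_monitoring",
    "credit_scoring", "loan_approval", "customer_decisions",
}

@dataclass
class AISystemRecord:
    """One inventory entry per AI system: vendor, in-house, or shadow."""
    name: str
    owner: str               # accountable business owner, not just a team alias
    origin: str              # "vendor" | "in-house" | "shadow"
    decision_domain: str     # the kind of decision the system influences
    affects_people: bool     # does it shape outcomes for individuals?
    last_assessed: date | None = None   # None = never assessed: a red flag

    def is_high_risk(self) -> bool:
        # First-pass screen only; formal classification requires legal
        # review against Annex III, not a keyword match.
        return self.affects_people and self.decision_domain in HIGH_RISK_DOMAINS

# A shadow tool surfaced during discovery enters the same inventory:
screener = AISystemRecord("resume-screener", "HR Ops", "shadow",
                          "hiring", affects_people=True)
assert screener.is_high_risk()
```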

Assess and Document Risks Against the Act's Specific Requirements

Once the inventory is complete, each high-risk system requires a structured risk assessment: how does the system affect fairness, transparency, and accountability? What are its failure modes? What populations does it affect, and how are those effects distributed? Documentation is not compliance; it is the evidence of the compliance work. Governance that cannot produce evidence cannot sustain trust, and under the EU AI Act, it cannot survive an audit.
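
What that evidence can look like in practice: the sketch below captures one assessment as a structured, serializable record. The RiskAssessment fields and the to_evidence helper are assumptions made for illustration; they mirror the questions above, not a formal template from the Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAssessment:
    """Structured evidence for one high-risk system (illustrative fields)."""
    system_name: str
    assessed_on: date
    assessor: str                     # a named, accountable person
    failure_modes: list[str]          # e.g. stale training data skews rankings
    affected_populations: list[str]   # who the decisions touch, and how
    fairness_notes: str               # observed or suspected disparate impact
    transparency_notes: str           # can outputs be explained to a reviewer?
    mitigations: list[str]            # controls mapped to each failure mode
    next_review: date                 # assessments are revisited, not filed

def to_evidence(assessment: RiskAssessment) -> dict[str, str]:
    """Flatten to an audit-friendly record; a real pipeline would also
    timestamp and sign the artifact so provenance is verifiable."""
    return {key: str(value) for key, value in vars(assessment).items()}
```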

Operationalize Human Oversight

The Act requires that humans retain meaningful oversight over high-impact AI decisions – not nominal oversight, where a human technically exists in the loop but has no practical ability to interrogate or override the system, but substantive oversight, where the process is designed to surface AI errors, flag anomalies, and enable correction. Building that capability requires more than policy language: it requires process design, tooling to surface model behavior, and defined escalation paths. The oversight must be architecture-aware, calibrated to the autonomy and impact level of each system.
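
One way to encode that distinction is to make the escalation path part of the decision flow itself, so a high-impact recommendation cannot apply without a recorded human verdict. The sketch below is a simplified illustration under assumed names (AIDecision, review_gate, impact_level); the point is the shape of the control, not a prescribed mechanism.

```python
import logging
from dataclasses import dataclass

log = logging.getLogger("ai_oversight")

@dataclass
class AIDecision:
    subject_id: str
    recommendation: str   # the model's proposed outcome
    impact_level: str     # "low" | "high", calibrated per system

def review_gate(decision: AIDecision, reviewer_verdict: str | None = None) -> str:
    """Substantive oversight: high-impact decisions never auto-apply.

    reviewer_verdict is the human's call; None means review is still pending.
    Both the model's recommendation and the human's verdict are logged,
    so overrides remain visible to an auditor.
    """
    if decision.impact_level != "high":
        log.info("auto-applied %s -> %s", decision.subject_id, decision.recommendation)
        return decision.recommendation
    if reviewer_verdict is None:
        log.info("escalated for human review: %s", decision.subject_id)
        return "pending_review"
    if reviewer_verdict != decision.recommendation:
        log.warning("override on %s: model said %s, reviewer said %s",
                    decision.subject_id, decision.recommendation, reviewer_verdict)
    return reviewer_verdict
```

The design choice that matters is that "pending_review" is a real state: the process cannot quietly fall through to the model's answer when no human has looked.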

Build AI Literacy Across the Organization

Tools are necessary, but they are not sufficient – education is the control that makes everything else work. Compliance officers who do not understand how a model produces an output cannot assess whether the human review process is substantive. Developers who have not internalized the Act's transparency requirements cannot build systems that meet them. From engineering to legal to business operations, role-specific AI governance literacy is not a training program; it is a control surface.

Establish Continuous Monitoring, Not Point-in-Time Compliance

The EU AI Act is not satisfied by a compliance report dated July 31, 2026. It requires ongoing monitoring of AI system performance, regular review of risk assessments as systems evolve, and documented evidence that the organization is managing AI risk as an operational discipline. Compliance is not a state; it is a cadence.
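
A minimal sketch of that cadence, under assumed names: a scheduled job walks the inventory and re-flags any system whose assessment has aged out. The stale_assessments function, the record schema, and the 90-day REVIEW_INTERVAL are all illustrative choices; the Act mandates ongoing review, not one specific interval.

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # illustrative cadence, not mandated

def stale_assessments(inventory: list[dict], today: date) -> list[str]:
    """Return systems whose risk assessment has aged past the cadence.
    Each record is assumed to carry 'name' and 'last_assessed' keys."""
    overdue = []
    for system in inventory:
        last = system.get("last_assessed")
        if last is None or today - last > REVIEW_INTERVAL:
            overdue.append(system["name"])
    return overdue

# Run on a schedule; every hit becomes a ticket, so evidence is
# regenerated continuously rather than produced once for an audit.
print(stale_assessments(
    [{"name": "loan-approval-engine", "last_assessed": date(2026, 1, 15)}],
    today=date(2026, 5, 4),
))  # -> ['loan-approval-engine']
```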


Demonstrable, Not Aspirational

The regulation will ask a straightforward question: can you demonstrate that your high-risk AI systems operate within defined boundaries, under meaningful human oversight, with documented accountability for the decisions they influence? Not can you assert it, not can you point to a governance document, but can you demonstrate it – with evidence, with audit trails, with operational controls that function as designed?

Governance becomes aspirational rather than demonstrable when it exists only on paper. The organizations best positioned on August 2 will be the ones that treated these ninety days not as a compliance sprint, but as the foundation of an AI governance capability they will need to sustain long after the deadline has passed.

If your organization is uncertain where it stands, UltraViolet Cyber's AI governance team works directly with organizations navigating EU AI Act compliance – from initial AI inventory and risk classification through control implementation, oversight process design, and evidence generation. While the deadline is fixed, the path to it is not.

Talk to an UltraViolet Governance expert