Continuous evidence, framework-agnostic.

Built for finance. ISO 42001, NIST AI RMF, EU AI Act, AIUC-1, SR 11-7. One pipeline, every framework.

The four frameworks we cover

One trace pipeline. Four framework outputs.

The same evaluation pipeline produces every output your auditor asks for. Map once, attest many.

ISO 42001
International · Certifiable
Scope: AI management system standard

Governance, risk, continuous improvement across the AI lifecycle. Clauses 4–10. Audited by an accredited body. The certification track procurement teams ask for.

NIST AI RMF
US · Voluntary
Scope: Risk management framework

GOVERN, MAP, MEASURE, MANAGE. Widely cited by enterprise AI teams and US federal procurement. Generative AI Profile published July 2024.

EU AI Act
EU · Binding law
Scope: Tiered risk classification

High-risk systems require risk management, technical documentation (Annex IV), human oversight, post-market monitoring, and incident logs.

AIUC-1
US · Private-sector standard
Scope: Agent-level certification

The SOC 2 for AI agents. Six categories: data and privacy, security, safety, reliability, accountability, societal risks. Published by AIUC Inc.

How the pipeline works

Production evidence in. Every framework out.

Production traces feed a measurement engine. The same evidence is mapped to whichever framework your auditor is holding. No second pipeline. No quarterly scramble.

  • Signal: production traces. Every interaction tagged with classification, source, baseline, policy outcome, and human owner. Captured in real time.
  • Engine: evidence pipeline. Evaluation against per-use-case baselines. Drift detection. Incident traces. Versioned policy enforcement.
  • Outputs: framework packs. ISO 42001 packet, NIST profile, EU AI Act Annex IV file, AIUC-1 attestation, SR 11-7 model file. Pulled on demand.

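The map-once, attest-many idea above can be sketched in a few lines. This is a hypothetical illustration, not TrustEvals' actual schema: the `Trace` fields mirror the tags named in the Signal bullet, and the framework labels are loose placeholders for where each field would land in a real pack.

```python
from dataclasses import dataclass

@dataclass
class Trace:
    classification: str   # e.g. "customer-facing"
    source: str           # system that emitted the interaction
    baseline: str         # per-use-case baseline the run was scored against
    policy_outcome: str   # "pass" / "fail" against the versioned policy
    owner: str            # accountable human

# Each framework pack is a relabelling of the same evidence.
# Labels here are illustrative placeholders, not official control IDs.
FRAMEWORK_FIELDS = {
    "iso_42001":   {"owner": "Clause 5 accountability",
                    "policy_outcome": "Clause 8 operational control"},
    "nist_ai_rmf": {"baseline": "MEASURE", "policy_outcome": "MANAGE"},
    "eu_ai_act":   {"source": "Annex IV traceability",
                    "policy_outcome": "Post-market monitoring"},
}

def framework_view(trace: Trace, framework: str) -> dict:
    """Project one trace record into one framework's vocabulary."""
    labels = FRAMEWORK_FIELDS[framework]
    return {label: getattr(trace, field) for field, label in labels.items()}

t = Trace("customer-facing", "support-agent", "v3-baseline", "pass", "j.doe")
print(framework_view(t, "nist_ai_rmf"))
# prints {'MEASURE': 'v3-baseline', 'MANAGE': 'pass'}
```

One record, three attestations. The evidence never changes; only the projection does, which is why no second pipeline is needed.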
The sequencing correction

Visibility comes first.

The AI Audit produces the operating read. From it, three outputs flow: operating, governance, fluency.

Governance is one of those outputs. Not the foundation under them.

Programs that lead with governance stall. Operators have nothing to show the board, value capture has no home, and the policy lives in a PDF nobody reads. Lead with visibility, and the assurance evidence falls out as a byproduct.
Sequencing posture

When continuous evidence fits. When it does not.

The honest framing. Compliance shape follows what is actually running, not the other way around.

Continuous

Always-on evidence

When AI is in production, in front of customers, or under regulator scrutiny. System behavior changes between attestations. Continuous evidence closes the gap.

Periodic

Annual certification

When procurement needs a one-time SOC-2-style stamp. Useful for procurement gates. Not sufficient when the model can change tomorrow.

Layered

Both, sequenced

Most enterprises running AI at scale want both. Continuous evidence under the hood. Annual certification on the surface for procurement.

Deferred

Not yet

Pre-production AI features. Internal pilots without external exposure. The right move is the AI Audit first. Compliance shape arrives once the operating read is in place.

Book the AI Audit.

Thirty minutes to size the discovery surface: employees, devices, SaaS admin access, developer tooling, internal agents, Shadow AI exposure, and the outcome read you need at the end.

FAQ

Compliance, asked plainly.

Does TrustEvals certify us against these frameworks?

No. Certification is a regulator or accredited body function. TrustEvals produces the evidence a certifier needs, plus the evidence that the certification is still accurate next month.

What happens when a framework changes?

Most changes land at the mapping layer. The underlying evidence (baselines, traces, incident logs) stays. The mapping changes. We publish updates and notify customers in the tier that depends on it.

Can one pipeline cover multiple frameworks at once?

Yes. The infrastructure is the same. The framework-specific work is the mapping layer on top. Most customers pick a primary framework based on auditor or buyer requirements, then add the others as overlays.
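The overlay idea above can be made concrete with a small sketch. Everything here is hypothetical, assuming an evidence store of control results and per-framework label maps; the control names and clause labels are illustrative only.

```python
# Evidence the pipeline already captured. Stays fixed.
evidence = [
    {"control": "drift_check", "result": "pass", "ts": "2025-06-01"},
    {"control": "incident_log", "result": "2 open", "ts": "2025-06-01"},
]

# The primary framework is one mapping overlay on that evidence.
overlays = {
    "iso_42001": {"drift_check": "Clause 9.1 monitoring",
                  "incident_log": "Clause 10.2 nonconformity"},
}

# Adding another framework later touches only the mapping layer,
# never the evidence itself.
overlays["aiuc_1"] = {"drift_check": "Reliability",
                      "incident_log": "Accountability"}

def attest(framework: str) -> list[dict]:
    """Build a framework pack by relabelling existing evidence."""
    m = overlays[framework]
    return [{"requirement": m[e["control"]], **e}
            for e in evidence if e["control"] in m]

print(len(attest("aiuc_1")))  # prints 2
```

Same two evidence records back both attestations; the second framework cost one dictionary, not a second pipeline.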

Is TrustEvals itself SOC 2 certified?

SOC 2 certification is on the roadmap. Current status is honestly disclosed at /legal/trust. We do not claim what we do not have.