2-Week AI Audit
For finance teams that need visibility into every AI tool, agent, and embedded feature; into Shadow AI risk outside approved channels (AI running outside the approved estate, with no owner, policy evidence, or exception path); and into the outcomes AI adoption is actually creating.
In two weeks, the Audit turns inventory, usage, spend, and evidence gaps into a board-ready operating read: where AI is creating value, where it is creating exposure, and what to fund next.
The first read makes AI value and AI risk specific.
Representative findings show the shape of the deliverable before the full working-paper pack follows.
Annualized AI license spend above observed usage needs, driven by seats on higher plans than workflow usage supported.
AI surfaces live in workflows with eval coverage too thin to support an audit opinion without exceptions.
Recoverable capacity from role-level AI fluency gaps in recurring finance workflows.
Where AI is creating value. Where AI is exposing risk.
The Audit separates visible adoption from unmanaged Shadow AI, then ties both to spend, risk, and outcomes in a board-ready report.
Finance leaders see which AI is approved, which AI is already running outside IT's view, where usage is producing outcomes, and where risk lacks evidence or an owner.
See how the findings become AI Transformation →
Four moves. One report.
Two weeks of focused access and working sessions. The same team runs any follow-on workstream.
Discovery
Kickoff with finance, ops, risk, and engineering. Confirm the systems, teams, and AI surfaces to inspect.
Inventory and risk baseline
Approved tools, Shadow AI, embedded SaaS AI, internal agents, license waste, and usage depth, mapped by business unit.
Vendor evaluation
Duplicate tools, underused seats, vendor exposure, and AI behaviors that need policy, evals, or owner assignment.
Board-ready report
Board-ready operating read: visibility, risk, adoption outcomes, priority owners, and the next funded workstream.
Every Audit lands one of four opinions.
The same discipline finance has used for a century, applied to the AI estate.
AI estate visible. Material risk contained.
The AI estate is visible and evidenced, and material risk is contained.
Inventory complete, Shadow AI under threshold, governance evidence in place, adoption outcomes traceable.
With exceptions noted.
Most of the estate is in order. Specific named exposures require remediation before next quarter.
Named Shadow AI hotspots, specific eval gaps, and specific roles below the fluency baseline.
Do not extend in current state.
Material exposures span multiple anchors. New AI workstreams should pause until remediated.
Unmanaged Shadow AI on regulated workflows, no evidence systems, no policy enforcement, internal agents in production with no evals.
Access was insufficient.
Visibility access was insufficient to issue an opinion.
MDM coverage gap, IDP access denied, fleet too small for valid sample.
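The four-opinion structure above can be sketched as a simple decision rule. This is an illustrative sketch only, not TrustEvals' actual criteria: the field names (`access_sufficient`, `shadow_ai_below_threshold`, and so on) are hypothetical, and real thresholds are set per engagement.

```python
from dataclasses import dataclass

@dataclass
class AuditFindings:
    """Hypothetical roll-up of the two-week diagnostic."""
    access_sufficient: bool          # could we see enough to opine at all?
    inventory_complete: bool         # every AI surface mapped
    shadow_ai_below_threshold: bool  # unmanaged usage under the materiality line
    evidence_in_place: bool          # governance evidence exists and is owned
    material_exceptions: int         # findings above the materiality threshold

def issue_opinion(f: AuditFindings) -> str:
    # No visibility, no opinion: mirrors a scope limitation in financial audit.
    if not f.access_sufficient:
        return "Access was insufficient"
    # Exposures span multiple anchors: pause new AI workstreams.
    if not (f.inventory_complete or f.evidence_in_place):
        return "Do not extend in current state"
    # Estate mostly in order, but named exposures need remediation.
    if f.material_exceptions > 0 or not f.shadow_ai_below_threshold:
        return "With exceptions noted"
    return "AI estate visible. Material risk contained."
```

Read top to bottom, the rule mirrors how a scope limitation trumps everything else, and how a clean opinion requires every anchor to hold at once.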
What's a material AI failure?
Financial auditing settled on materiality thresholds a century ago, often around 5% of net income. AI teams are still guessing.
The Audit defines materiality per use case, on two axes: impact severity across regulatory, financial, and reputational harm, and frequency across one user, one query type, one workflow class, or portfolio-wide exposure. Findings above the materiality threshold land in the opinion as audit exceptions. Findings below land in the appendix.
Materiality is set jointly in the kickoff session. TrustEvals walks the customer through industry defaults for their finance sub-segment. The customer signs off. The threshold lands in the engagement letter, and the opinion is issued against it.
A finding without a materiality threshold is a complaint. A finding above threshold is an audit exception.
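The two-axis materiality test reads naturally as a small classifier. A minimal sketch, assuming ordinal scales for severity and frequency and a per-use-case threshold agreed at kickoff; the scale values and names here are illustrative, not TrustEvals' actual rubric.

```python
# Ordinal scales for the two materiality axes (illustrative values only).
SEVERITY = {"reputational": 1, "financial": 2, "regulatory": 3}
FREQUENCY = {"one_user": 1, "one_query_type": 2, "one_workflow_class": 3, "portfolio_wide": 4}

def classify_finding(severity: str, frequency: str, threshold: int) -> str:
    """Score a finding on impact x frequency and route it into the report.

    Findings at or above the agreed threshold become audit exceptions
    in the opinion; everything below lands in the appendix.
    """
    score = SEVERITY[severity] * FREQUENCY[frequency]
    return "audit exception" if score >= threshold else "appendix"
```

A regulatory failure with portfolio-wide exposure (3 × 4 = 12) clears almost any threshold; a one-user reputational nit (1 × 1 = 1) goes to the appendix.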
The numbers a board has never seen before.
Most boards see vendor counts and seat licenses. The Audit returns duplicate spend, Shadow AI, internal agents in production, adoption outcomes, risk without evidence, and the workforce-fluency stage. On one page.
Audit findings. Sequenced workstreams.
The Audit shows which AI work is creating value, which Shadow AI and policy gaps create exposure, and which teams need enablement. That decides what runs next.
AI Transformation
Turn the adoption findings into production workflows with measurable revenue, margin, or cycle-time outcomes.
Open AI Transformation →
AI Governance
Turn the risk findings into policy, continuous evidence, and framework-ready proof before the audit committee asks.
Open AI Governance →
AI Fluency
Turn usage and capability gaps into role-specific enablement, manager telemetry, and stronger day-to-day AI judgment.
Open AI Fluency →
The Audit fits the risk model finance already uses.
Three lines of defense means business owns the work, risk oversees the controls, and audit tests whether the evidence holds.
| Line | Plain-English role | TrustEvals role |
|---|---|---|
| First line (1LoD) | Business teams own the AI workflow | AI Transformation + AI Fluency: workflow owners capture the upside and build the role-level fluency to operate it. |
| Second line (2LoD) | Risk and compliance oversee the controls | AI Governance: policy, evidence, exception handling, and framework mapping. |
| Third line (3LoD) | Internal audit and external auditors rely on the evidence | AI Audit + Evals platform substrate: opinion, materiality, working papers, and substantive testing evidence. |
Access. That is all.
IDP read access, SaaS admin portals, code hosting at the org level, and an MDM channel for endpoint discovery. That is enough to map visibility, Shadow AI, usage, and evidence gaps.
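Those four access channels feed a simple cross-reference: anything a discovery channel sees that the approved register does not is a Shadow AI candidate. A hypothetical sketch of that set difference, with channel and tool names invented for illustration rather than taken from the actual discovery pipeline:

```python
def find_shadow_ai(approved: set[str], discovered: dict[str, set[str]]) -> dict[str, set[str]]:
    """Map each discovery channel (IDP, SaaS admin, code host, MDM) to the
    AI tools it observes that are absent from the approved register."""
    return {
        channel: tools - approved
        for channel, tools in discovered.items()
        if tools - approved  # keep only channels that surfaced something new
    }

# Example: OAuth grants in the IDP and endpoint agents from MDM
# surface tools the approved register never listed.
approved = {"vendor-copilot", "approved-chat"}
discovered = {
    "idp_oauth_grants": {"vendor-copilot", "unvetted-notetaker"},
    "mdm_endpoints": {"local-llm-runner"},
    "saas_admin": {"approved-chat"},
}
shadow = find_shadow_ai(approved, discovered)
```

The design point is that read access alone is enough: no agents are deployed, and Shadow AI falls out as the difference between what the channels observe and what the register claims.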
Common questions. Direct answers.
Is the Audit itself one of the workstreams?
No. It is the entry diagnostic that maps AI visibility, Shadow AI, spend, usage, risk, and evidence gaps before recommending which workstream to run next.
Does the Audit work for smaller organizations?
The Audit is fixed-scope for medium-to-large finance teams. Smaller organizations are scoped differently because the discovery surface changes.
How long does the Audit take?
Two weeks. Week one covers discovery, inventory, Shadow AI, and the spend baseline. Week two turns vendor evaluation, risk findings, and adoption outcomes into the board-ready recommendation.
Can we skip the Audit and start a workstream directly?
Sometimes, if you already know your AI inventory, Shadow AI exposure, usage depth, risk posture, and exact workstream gap. Most teams that try to skip the Audit pause two weeks in to build that baseline anyway.
What determines the scope?
We scope around employee and device footprint, SaaS admin surface, developer tooling, internal agents, and compliance exposure. The follow-on work scales from the findings.
Does this replace our SOC 2 or ISO 42001 audit?
No. The AI Audit is a use-case-specific operating diagnostic that produces evidence your SOC 2 and ISO 42001 auditors can rely on. We sit one layer below the framework audit.
Can we share the report with our audit committee?
Yes. Same memo. The output is a structured audit memorandum with the opinion, materiality threshold, scope, exceptions, and remediation sequencing on the first three pages. The committee gets the same shape they already read from external auditors. No separate audit-committee cut.
Do you work with our external audit firm?
Yes. We deliver the working-paper substrate. Big-4 and mid-tier audit firms increasingly co-engage with us when their finance clients need AI assurance evidence the framework auditor cannot produce alone.
Audit cadence.
Finance already plans audit budgets around cadence. The AI opinion should have the same rhythm.
| Cadence | Timing | What happens |
|---|---|---|
| Initial | Two weeks | Full diagnostic. Opinion issued. |
| Quarterly | Three days | Refresh the inventory, re-run materiality scan, refresh the opinion. |
| Event-driven | Same week | Model swap, vendor change, new internal agent in production, regulatory letter. |
Models do not sit still. Continuous over annual is how the opinion stays valid.
Book the AI Audit.
Thirty minutes to size the discovery surface: employees, devices, SaaS admin access, developer tooling, internal agents, Shadow AI exposure, and the outcome read you need at the end.