APRA's AI Letter Turns Governance Into an Operating Test
APRA's April 2026 letter makes AI governance practical: boards need AI literacy, executives need lifecycle accountability, and regulated entities need controls that act before AI changes data, suppliers, resilience, or authority, and evidence of what actually happened.
Executive summary
APRA's April 2026 AI letter is not asking regulated entities for a better AI policy document. It is asking whether boards, accountable executives, risk teams, and technology leaders can see, control, test, and evidence AI use across security, suppliers, resilience, and the full AI lifecycle.
APRA's April 2026 letter to industry is a useful reset for AI governance because it treats AI as an operating risk, not a procurement category. The letter recognises AI's productivity and customer-experience upside, but it also makes a blunt point: governance, risk management, assurance, and operational resilience are not keeping pace with adoption.
That is exactly the gap security and risk teams feel when AI moves from experiments into software engineering, claims triage, loan processing, fraud disruption, customer interaction, and internal analysis. The issue is no longer whether AI is being used. It is whether the institution can explain which AI use exists, which data and authority it touches, which suppliers sit underneath it, and what controls fire when behaviour changes.
APRA Is Moving the Test From Approval to Evidence
The most important message is that approval is not enough. APRA expects boards to maintain enough AI literacy to set direction and challenge risk, and to oversee an AI strategy aligned to risk appetite, tolerance settings, third-party dependency monitoring, and resilience triggers.
That creates a practical evidence burden. A board paper that says a vendor has been approved does not show what happened when a staff member uploaded customer data, when an assistant wrote production code, when a model drifted, when an agent called a tool, or when an AI supplier changed behaviour. Runtime evidence is what connects strategy, risk appetite, and actual use.
Security Now Includes Nonhuman Actors and Agentic Workflows
APRA calls out prompt injection, data leakage, insecure integrations, exploit injection, and the misuse of autonomous AI agents. It also notes that identity and access management has not adjusted to nonhuman actors, while AI-assisted software development is straining change and release controls.
This is the reason AI security cannot stop at user awareness training. Agents, copilots, browser tools, coding assistants, MCP servers, OAuth-connected apps, and embedded vendor features all become places where data or authority can move. Company policy has to be enforceable before prompts, uploads, OAuth grants, memory writes, and delegated tool actions create exposure.
Shadow AI Is a Preventive-Control Problem
APRA's observation on staff using AI tools outside approved enterprise control frameworks is especially important. The letter is clear that policy direction and after-the-fact detection are weak substitutes for enforceable technical restrictions and robust preventive controls.
That should change the messaging inside regulated entities. The goal is not to shame experimentation or block useful AI. The goal is to put the decision point in the workflow: allow low-risk use, warn when data or context is sensitive, block disallowed exposure, route high-risk use for review, and retain evidence for accountable executives, risk, security, compliance, and audit.
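As a minimal sketch of that in-workflow decision point, the snippet below maps data classifications to allow, warn, block, or review outcomes and retains each decision as evidence. The names (Action, PolicyDecision, decide, the example data classes) are illustrative assumptions for this article, not a reference to any specific product or an APRA-prescribed control.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from enum import Enum


class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"
    BLOCK = "block"
    REVIEW = "review"


@dataclass
class PolicyDecision:
    action: Action
    reason: str
    data_class: str
    use_case: str
    timestamp: str


# Illustrative mapping from data classification to default action; a real
# policy would also weigh destination, user role, and use-case risk tier.
DEFAULT_ACTIONS = {
    "public": Action.ALLOW,
    "internal": Action.WARN,
    "customer_pii": Action.BLOCK,
    "credit_decision_input": Action.REVIEW,
}

# Evidence retained from the same decision that enforced the policy.
audit_log: list[dict] = []


def decide(data_class: str, use_case: str) -> PolicyDecision:
    """Apply the allow / warn / block / review policy at the point of use."""
    action = DEFAULT_ACTIONS.get(data_class, Action.REVIEW)  # unknown classes fail to review
    decision = PolicyDecision(
        action=action,
        reason=f"data class '{data_class}' mapped to {action.value}",
        data_class=data_class,
        use_case=use_case,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    audit_log.append(asdict(decision))
    return decision


if __name__ == "__main__":
    print(decide("customer_pii", "claims_triage_upload").action)  # Action.BLOCK
    print(decide("internal", "meeting_summary").action)           # Action.WARN
```

The design point is that the same event both enforces the policy and produces the record that risk, compliance, and audit later rely on, rather than reconstructing exposure after the fact.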
AI Inventory Has to Include the Supply Chain
APRA expects entities to maintain an inventory of AI tooling and use cases, but the supplier section makes clear that a useful inventory cannot stop at application names. AI is embedded in SaaS, developer tools, platforms, foundation models, training data sources, and fourth-party services that may be opaque to the regulated entity.
The operational question is therefore broader than "which tools have we approved?" It is "which prompts, files, memory stores, connectors, model providers, sub-processors, and tool actions are part of the AI supply chain for this use case?" Without that view, concentration risk, exit planning, audit rights, model change management, and incident notification remain contractual hopes rather than operational controls.
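One way to make that broader inventory concrete is to record each use case together with its full dependency chain. The structure below is a hedged sketch only; the field names and the example entry are assumptions for illustration, not an APRA-prescribed schema.

```python
from dataclasses import dataclass, field


@dataclass
class AIUseCaseRecord:
    """One inventory entry covering the AI supply chain behind a use case."""
    use_case: str
    business_owner: str
    criticality: str                                           # e.g. "critical operation", "material", "low"
    model_providers: list[str] = field(default_factory=list)
    sub_processors: list[str] = field(default_factory=list)    # fourth parties
    connectors: list[str] = field(default_factory=list)        # OAuth apps, MCP servers, plugins
    data_classes: list[str] = field(default_factory=list)      # what can flow into prompts, files, memory
    tool_actions: list[str] = field(default_factory=list)      # delegated actions an agent can take
    fallback_process: str = "undefined"


# Illustrative entry only; the names are invented for this sketch.
claims_triage = AIUseCaseRecord(
    use_case="claims triage summarisation",
    business_owner="Head of Claims",
    criticality="critical operation",
    model_providers=["hosted foundation model (vendor A)"],
    sub_processors=["vendor A cloud region", "transcription sub-processor"],
    connectors=["claims platform OAuth app", "document store connector"],
    data_classes=["customer_pii", "health_information"],
    tool_actions=["read claim file", "draft triage note"],
    fallback_process="manual triage queue",
)
```

An inventory held this way can answer the concentration, exit, and notification questions directly, because the suppliers and data paths sit on the same record as the use case.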
Assurance Must Become Continuous Enough for Dynamic Systems
APRA's assurance critique is direct: point-in-time and sample-based assurance is poorly matched to systems that are probabilistic, adaptive, or prone to drift. Internal audit and second-line risk teams also need the technical capability and tooling to assess AI systems, including agentic workflows and AI-assisted code generation.
That does not mean every AI interaction needs the same review burden. It means monitoring should be proportionate to criticality and tied to purpose, limitations, explainability, model behaviour, customer impact, and control breakdowns. The practical architecture is policy, control, and observability at runtime, with audit evidence generated from the same events that protected the workflow.
Operational Resilience Requires Fallback, Not Just Monitoring
APRA expects entities to assess the implications of AI reliance for operational resilience and business continuity. Where AI supports critical operations, credible fallback processes are required.
That point matters because AI is often introduced as an efficiency layer before resilience ownership is mature. If a claims workflow, fraud process, developer pipeline, customer-service operation, or risk report depends on AI, the institution needs to know what happens when that AI produces unsafe output, degrades, becomes unavailable, changes supplier behaviour, or must be isolated after an incident.
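A minimal sketch of that fallback posture is below, assuming a hypothetical AI step with stubbed checks and a manual queue; the function names and thresholds are placeholders for whatever the entity's own resilience criteria require.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_resilience")


# --- Hypothetical stubs standing in for the entity's real systems ---

def run_ai_step(claim: dict) -> dict:
    """Placeholder for the AI-assisted step (e.g. a triage summary)."""
    return {"claim_id": claim["id"], "summary": "draft triage summary", "confidence": 0.9}


def passes_safety_checks(result: dict) -> bool:
    """Placeholder output validation against the entity's own criteria."""
    return result.get("confidence", 0.0) >= 0.7 and bool(result.get("summary"))


def route_to_manual_queue(claim: dict, reason: str) -> dict:
    """Placeholder for the documented non-AI fallback process."""
    return {"claim_id": claim["id"], "handled_by": "manual_triage_queue", "reason": reason}


def triage_with_fallback(claim: dict) -> dict:
    """Use the AI step when it is available and its output passes checks;
    otherwise continue the critical operation on the manual path."""
    try:
        result = run_ai_step(claim)
        if not passes_safety_checks(result):
            logger.warning("AI output failed checks for claim %s", claim["id"])
            return route_to_manual_queue(claim, reason="unsafe_output")
        return result
    except (TimeoutError, ConnectionError) as exc:
        logger.error("AI step unavailable for claim %s: %s", claim["id"], exc)
        return route_to_manual_queue(claim, reason="ai_unavailable")


if __name__ == "__main__":
    print(triage_with_fallback({"id": "CLM-1042"}))
```

The point is not the specific checks; it is that the critical operation keeps moving on a documented non-AI path when the AI degrades, fails its checks, or has to be isolated.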
Regulated AI Governance Needs Runtime Evidence
APRA's letter makes one operating requirement clear: regulated AI governance is an evidence problem. Boards and executives need proof that AI use is aligned to risk appetite. Security needs preventive controls before data and authority leave. Risk and audit need lifecycle evidence. Technology teams need visibility across tools, agents, OAuth paths, and suppliers.
3LS fits that gap as the runtime governance layer. It classifies prompts and data, applies allow, warn, block, or review decisions before exposure, tracks agent and tool activity, and preserves the evidence needed to show how policy operated in real workflows.
The Immediate Action Is to Map the Operating Surface
Regulated entities should start with the surface APRA is actually describing: approved AI tools, shadow AI use, customer-facing AI, AI-assisted software delivery, autonomous workflows, supplier dependencies, high-risk decisions, critical operations, and data flows into prompts, files, memory, connectors, and tools.
Once that surface is visible, the next decisions become concrete. Which uses require human review? Which data classes cannot leave? Which suppliers are critical? Which workflows need fallback? Which model changes trigger review? Which events prove that controls worked? That is where AI governance becomes operational enough for APRA's expectations.
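Those decisions can be captured as a simple, reviewable mapping before any tooling is chosen. The dictionary below is an illustrative sketch; every category and value is an assumption made for this example, not an APRA requirement.

```python
# Illustrative operating-surface map; each value is an example answer that an
# accountable executive could sign off on, review, and evidence against.
operating_surface = {
    "human_review_required": ["credit decisions", "claims denials", "complaint responses"],
    "data_classes_that_cannot_leave": ["customer_pii", "health_information", "card_data"],
    "critical_suppliers": ["primary foundation model provider", "claims platform vendor"],
    "workflows_needing_fallback": ["claims triage", "fraud disruption", "customer service"],
    "model_changes_triggering_review": ["provider model version change",
                                        "prompt or tool change",
                                        "drift beyond agreed tolerance"],
    "evidence_events": ["policy decision (allow/warn/block/review)", "agent tool call",
                        "OAuth grant", "supplier behaviour change"],
}

for question, answers in operating_surface.items():
    print(f"{question}: {', '.join(answers)}")
```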