Internal auditors fear AI fraud but lack readiness
Internal audit teams are increasingly concerned about artificial intelligence-driven fraud, but most do not believe they are ready to detect it, according to research from the Institute of Internal Auditors.
The study, based on a survey of more than 370 senior internal audit leaders, found that 85% view AI-enabled fraud as a moderate to high risk. Fewer than four in 10 said their function is adequately prepared to detect it, and 62% described their teams as unprepared or only minimally prepared.
The results point to a widening gap between risk awareness and operational readiness. They also suggest that greater familiarity with AI-driven fraud methods increases concern rather than reassurance: leaders who said they were very familiar with the technology rated the threat higher than peers with less knowledge.
Visible threats
Phishing remains the leading worry: 88% of respondents identified AI-powered phishing as a primary concern. Fictitious financial documentation, including fabricated invoices and supporting records, ranked next at 65%.
Less visible forms of crime drew less attention. Synthetic identity fraud ranked lowest at 27%, even as it grows across financial services and online commerce. The findings suggest audit teams prioritise attacks that leave clear artefacts, such as emails, voice calls, or documents, over fraud that hides within data, account creation, and onboarding processes.
The survey also showed uncertainty about the scale of the problem within organisations. Around 34% of leaders said they were unsure whether their organisation had been targeted by AI-enabled fraud. That lack of clarity raises questions about monitoring and reporting, particularly when incidents may span multiple teams, including finance, cybersecurity, risk, and customer operations.
Barriers to audit
Internal audit leaders cited structural constraints that limit their response. Limited access to appropriate technology and tools was the most common barrier, cited by 57% of respondents. Skill gaps followed closely, with 55% reporting insufficient staff expertise related to AI risk.
Resource pressures also featured heavily. Budget constraints were cited by 46% of leaders. Competing organisational priorities affected 43%, as did insufficient time to dedicate to AI-specific risk management.
These constraints shape audit teams' current role. The most common activities remain conventional: 57% said they focus on assessing control weaknesses, while 51% advise on policy. Fewer teams appear to be set up for active fraud hunting or rapid adaptation when threat actors change tactics.
How fraud changes
The research described AI as a driver of more sophisticated fraud techniques: deepfakes can imitate voice and video, automated phishing can scale social engineering, and document-generation tools can produce plausible invoices and financial records.
It also highlighted features that can appear in fabricated documents, including mismatched or overlapping formatting, small mathematical errors, and inconsistent placement of logos or barcodes compared with standard vendor templates. Widely accessible platforms such as ChatGPT can generate such documents and mimic vendor formats and line-item details.
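One of those telltale features, small arithmetic errors in line items, lends itself to a simple automated check. A minimal sketch in Python, assuming invoice data has already been extracted into structured line items (the field names and tolerance are illustrative, not taken from the report):

```python
# Minimal sketch: flag invoices whose stated amounts do not match the
# arithmetic of their own line items. Field names ("qty", "unit_price",
# "line_total") are illustrative; a real pipeline would populate them
# from an extraction step not shown here.

def check_invoice(line_items, stated_total, tolerance=0.01):
    """Return a list of arithmetic discrepancies found in a parsed invoice."""
    issues = []
    computed_total = 0.0
    for item in line_items:
        expected = round(item["qty"] * item["unit_price"], 2)
        if abs(expected - item["line_total"]) > tolerance:
            issues.append(
                f"line '{item['desc']}': stated {item['line_total']}, "
                f"computed {expected}"
            )
        computed_total += item["line_total"]
    computed_total = round(computed_total, 2)
    if abs(computed_total - stated_total) > tolerance:
        issues.append(
            f"invoice total: stated {stated_total}, computed {computed_total}"
        )
    return issues

invoice = [
    {"desc": "Widgets", "qty": 10, "unit_price": 4.50, "line_total": 45.00},
    {"desc": "Gadgets", "qty": 3, "unit_price": 19.99, "line_total": 59.99},
]
print(check_invoice(invoice, stated_total=104.99))
# → ["line 'Gadgets': stated 59.99, computed 59.97"]
```

A check this simple obviously catches only one fingerprint of fabrication; the formatting and logo-placement signals the report mentions would need document-layout analysis rather than arithmetic.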
AI for defence
While AI increases fraud risks, it is also a "dual-use capability" for defenders. Many internal audit teams plan to expand their own use of AI, with 83% of respondents saying they intend to increase usage in the next year.
The research linked AI use to deeper analysis and greater efficiency in audit work, including automated verification in operational controls. One example was revising web-based forms to include checks for "human entry," which can block AI-driven bot submissions and reduce fraudulent applications.
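The report does not specify how a "human entry" check works, but two common signals are a hidden "honeypot" field (which humans never see and therefore leave blank) and the time taken to complete the form. A minimal sketch under those assumptions:

```python
# Illustrative "human entry" check for a web-form submission. The report
# does not prescribe a mechanism; this sketch assumes two common signals:
# a hidden honeypot field, and a minimum plausible form-fill time.

MIN_FILL_SECONDS = 3.0  # threshold is an assumption, tuned per form

def looks_human(form_data, render_time, submit_time):
    """Return True if the submission passes basic human-entry checks."""
    # Bots that auto-fill every visible-in-markup field populate the
    # honeypot; humans never see it, so any value is a red flag.
    if form_data.get("website"):
        return False
    # Near-instant submission suggests automation rather than typing.
    if submit_time - render_time < MIN_FILL_SECONDS:
        return False
    return True

bot = {"name": "Acme", "website": "http://spam.example"}
print(looks_human(bot, render_time=0.0, submit_time=0.4))     # → False

human = {"name": "Jane Doe", "website": ""}
print(looks_human(human, render_time=0.0, submit_time=12.5))  # → True
```

In practice such checks are layered with CAPTCHAs and rate limiting; a single signal is easy for a determined attacker to learn and evade.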
Over the longer term, wider AI use within audit also changes skill requirements. Hands-on use can build "detection intuition" and a practical understanding of how AI-generated outputs look and behave in real workflows. This experience can improve recognition of fabricated documentation and unusual patterns in payment or transaction data.
Priority actions
The research set out three priorities for internal audit teams. The first, described as the single most important action, is skill building: continuous, adaptive training and practical exercises in a controlled environment to develop "detection intuition". Development should range from foundational awareness to advanced upskilling, specialised training, or formal certification for staff with detection responsibilities.
The second priority is aligning AI use across the organisation. It called for greater visibility into where AI is embedded in business processes, including "AI inventories" and structured inquiries across business units. This information should feed into audit planning, risk assessments, and audit procedures, reducing the risk of AI governance being managed in silos.
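An "AI inventory" can start as a simple structured record per system. The report does not prescribe a schema; the fields below are assumptions about what audit planning and cross-unit inquiries might need:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Illustrative sketch of one entry in an "AI inventory". All fields are
# assumptions about what audit planning might capture, not a standard.

@dataclass
class AIInventoryEntry:
    system_name: str
    business_unit: str        # owner, for structured inquiries across units
    use_case: str             # where AI is embedded in the business process
    data_sensitivity: str     # e.g. "public", "internal", "personal"
    human_review: bool        # is a person in the loop before decisions act?
    last_audited: Optional[str] = None  # feeds into audit planning

entry = AIInventoryEntry(
    system_name="invoice-triage-model",
    business_unit="Accounts Payable",
    use_case="routes supplier invoices for approval",
    data_sensitivity="internal",
    human_review=True,
)
print(asdict(entry))
```

Collecting even this much per system gives audit planning a concrete input and makes gaps, such as systems with no human review and no audit date, easy to query.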
The third priority is collaboration across functions, with effective defence treated as a shared responsibility. It pointed to closer coordination with technology, cybersecurity, and risk teams, alongside engagement with business units that already use AI in day-to-day processes.
As AI adoption spreads through finance, customer operations, and back-office processes, internal audit will likely face growing pressure to modernise tools, strengthen specialist skills, and build closer ties with cybersecurity and technology teams.