SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Exclusive: SAS says trust gap slows ANZ AI rollout

Mon, 15th Dec 2025

Australia and New Zealand have emerged as global frontrunners in generative AI adoption, but a widening gap between confidence in AI systems and the structures underpinning them is now the region's biggest risk, according to Jonathan Butow, SAS Head of AI and Innovation for Australia and New Zealand.

Butow said the latest findings from the IDC Data and AI Impact Report: The Trust Imperative, commissioned by SAS, reflect what he has observed across more than 60 organisations this year.

"Australia and New Zealand are leading in Gen AI adoption. So 92% of organisations are moving beyond experimentation and they're looking at more high impact, trusted use cases," Butow said.

He noted that organisations have begun reframing GenAI's value proposition, moving away from cost-cutting and towards measurable improvements in decision-making.

"We're seeing them actually prioritise decision-making process efficiencies over cost savings," added Butow.

Across the region, banks and insurers remain the most advanced adopters, supported by long-standing analytics frameworks. Healthcare is rising quickly as hospitals test digital twins, AI-supported clinical triage and predictive tools to relieve pressure on patient flow and workforce capacity.

Trust mismatch

The research highlights that ANZ organisations report relatively high trust in generative and agentic AI, despite significantly lower investment in the governance measures required to make these systems reliable. Butow said this contradiction is now one of the most important issues shaping the market.

"When AI goes wrong, people blame the model or the data, but the real issue is usually a breakdown in governance and accountability. People lose confidence in systems, and the adoption stalls," Butow said.

While most leaders surveyed claimed some level of trust, the proportion with robust safeguards remains limited.

"Seventy-eight percent of leaders say they trust AI, but only 40% have trustworthy systems with governance," he said.

The disconnect between perceived and actual trust is also evident when comparing technologies. Butow said traditional AI models - longstanding, explainable and well-established - are viewed with less confidence than more humanlike or conversational systems.

"Organisations are more willing to trust agentic AI and generative AI than traditional AI, which in itself is more explainable and has more rigour behind it," he said.

He described this as a trend that "makes me nervous", pointing to the risk that assumption replaces accountability when AI appears familiar or intuitive.

Agentic caution

Despite strong interest, ANZ organisations are adopting agentic AI more slowly than global peers.

"Adoption of agentic AI is sitting below the global average in Australia and New Zealand, so we're about 14% lower," Butow said.

He attributed this to cultural caution, incomplete regulation and ongoing work on data pipelines and responsible design. Many organisations are still preparing their infrastructure, including synthetic data systems and audit mechanisms for autonomous decision-making.

Quantum claims and hype

One of the report's more surprising findings was the number of respondents reporting use of quantum AI.

"Thirty percent of respondents claim that they're already using quantum AI," said Butow.

He said this figure reflects how quickly hype can leap ahead of operational readiness, noting that most organisations he encounters are not yet close to deploying quantum-enabled systems.

Barriers to value

Butow said the region's most persistent challenge is turning AI pilots into enterprise-scale outcomes.

"There's a massive gap in Australia and New Zealand on being able to measure the value of their AI strategy. ROI is a big pressure point," he said.

He said organisations often underestimate the hidden cost of scaling AI, with cloud consumption, data pipelines, governance frameworks and talent accounting for the majority of expenses.

"The hidden cost isn't the model, it's actually the data pipelines, the governance, the talent," added Butow.

Trust remains the ultimate barrier to widespread deployment.

"If a leader or an executive can't explain how it works, they're not going to deploy it into critical workflows," Butow said.

Sector leaders

The financial services sector is leading the region's adoption of responsible AI, supported by regulatory oversight and decades of investment in analytics capability.

"They also have more mature governance frameworks, because the regulators have spent a lot more time looking at banks," he said.

Insurance follows closely, particularly in fraud and risk modelling. Healthcare is accelerating because of increasing demand pressures and the scale of available data.

Public sector adoption is rising as agencies explore emergency management, citizen services and major event planning.

Building trustworthy systems

Butow said organisations now need to shift from isolated AI experiments to an enterprise-level view of data, compliance and the lifecycle of automated decision-making.

"Move away from just thinking about AI systems as proof of concept into what this means for the entire enterprise, and think about security, lineage, oversight and compliance," added Butow.

He urged leaders to prepare for emerging global regulation and avoid relegating workforce education to a late-stage task.

"Invest in enablement and education. A lot of the trust that we're talking about actually comes from understanding. It comes from literacy," he said.

Priorities for 2026

Butow said organisations looking to sustain their AI investments over the next year must unify data foundations, embed governance into every model, focus on decisioning rather than experimentation, and develop guardrails for safe autonomy.

"Agentic AI, like traditional AI, is only going to deliver value if it's well governed," Butow said.