
AI projects face long delays amid rising security risks

Thu, 11th Dec 2025

AvePoint has warned that artificial intelligence projects are slipping by up to a year, as more than three-quarters of organisations report AI-related security incidents.

The data security and governance company has released new research on enterprise AI use in 2025. The study points to a growing gap between AI ambitions and what organisations are able to deploy in practice.

The report surveyed 775 professionals with responsibility for AI, information management or data security. Respondents came from 26 countries and sectors including financial services, government and healthcare.

AvePoint said most organisations are moving from pilots towards broader AI deployment. It found that delays and setbacks are now more common as projects encounter structural data problems.

Rollouts delayed

The study found that AI deployment delays now average nearly six months. Some organisations reported delays of up to 12 months.

Data quality and data security issues are the main reasons for project slippage. Inaccurate AI output was a factor for 68.7% of respondents. Data security concerns were a factor for 68.5%.

Organisations also highlighted AI hallucinations as a specific threat. 32.5% rated hallucinations as the most serious risk posed by generative AI assistants.

Employee attitudes are adding further friction: 64.2% of respondents cited staff "lack of perceived value" as a major barrier to rollout.

This finding suggests that many employees do not yet see clear benefits from AI tools. It also indicates pressure for more structured AI training and change management.

Dana Simberkoff, Chief Risk, Privacy and Information Security Officer at AvePoint, said many businesses are underestimating the operational demands of AI oversight.

"We're seeing organizations treat AI governance as a checkbox exercise rather than an operational imperative," said Dana Simberkoff, Chief Risk, Privacy and Information Security Officer, AvePoint. "The gap between having policies and implementing them effectively is where most security incidents occur. This challenge becomes exponentially more critical as organizations move toward agentic AI systems that can act independently and make decisions without human oversight. Basic security measures cannot keep pace with the complexity and sprawl of AI-generated data, leaving organizations vulnerable unless they evolve their governance models to handle autonomous AI agents."

Governance paradox

The research identified what AvePoint called an AI governance paradox: organisations report confidence in their information management programmes, yet still experience high rates of AI-related security incidents.

90.6% of respondents said they have effective information management programmes. Only 30.3% said they have effective data classification systems in place.

Among organisations that rated their information management as most effective, 77.2% had still experienced data security incidents. This suggests that subjective readiness assessments often do not match actual risk exposure.

Many organisations are still working on formal AI guardrails. 43.4% said they are actively developing AI policies.

Unsanctioned AI use is also rising. The study reports that shadow use of AI systems continues to increase each year, despite growing governance efforts.

Data growth strain

The report links many AI risks to the volume and age of corporate data. It highlights a sharp rise in AI-generated information, alongside existing stores of older content.

Nearly 20% of organisations expect generative AI systems to create more than half of their data within the next 12 months. The average annual data growth rate, currently 23.8%, is forecast to reach 31.6% next year.
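
As a rough illustration of what those compounding rates imply, the short sketch below projects a hypothetical data estate forward using the reported figures. It is an editorial illustration, not a calculation from the AvePoint report, and it assumes the two growth rates apply to total stored data in consecutive years.

```python
# Illustrative only: project total data volume under the growth rates cited in the report.
# The 100 TB starting estate is hypothetical; 23.8% is the reported current rate,
# 31.6% the forecast rate for next year.
current_volume_tb = 100.0          # hypothetical starting estate
growth_this_year = 0.238           # reported current average growth rate
growth_next_year = 0.316           # forecast growth rate for next year

after_year_one = current_volume_tb * (1 + growth_this_year)
after_year_two = after_year_one * (1 + growth_next_year)

print(f"After year one: {after_year_one:.1f} TB")   # ~123.8 TB
print(f"After year two: {after_year_two:.1f} TB")   # ~162.9 TB
```

On those assumptions, an estate grows by roughly 60% over two years before any AI-generated content is layered on top.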

84.6% of organisations use multiple storage platforms. This trend increases the risk of data sprawl and fragmented oversight.

Respondents estimated that 70.7% of their organisational data is more than five years old. This raises concerns about training AI systems on outdated or low-quality information.

John Peluso, Chief Technology Officer at AvePoint, said organisations are struggling with both volume and control of AI-related data.

"The exponential growth in AI-generated content is fundamentally changing how organizations must approach data security and governance," said John Peluso, Chief Technology Officer, AvePoint. "We're seeing enterprises struggle not just with the volume of new data, but with maintaining data lineage and ensuring quality control when AI systems are both consuming and creating information at scale. The organizations succeeding in this environment are those building governance directly into their AI workflows rather than treating it as an afterthought."

Security incidents widespread

The study found that more than 75% of organisations have experienced AI-related security breaches. These incidents include data exposure issues linked to generative AI tools and autonomous systems.

Many of these breaches appear linked to gaps between policy and practice. AvePoint's findings show that formal programmes exist in many firms, but are often not fully operational at the data level.

Concerns about decision-making by autonomous AI agents are also rising. The report notes that organisations increasingly treat agentic AI as a distinct governance challenge.

Security leaders are focusing on issues such as who approves AI actions. They are also examining which datasets these systems can access.
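
What those controls might look like in practice varies widely. The sketch below is a minimal, hypothetical illustration of the two measures mentioned above, an approval step for high-impact agent actions and an allowlist restricting which datasets an agent may read; it is not drawn from the AvePoint report, and the dataset and action names are invented.

```python
# Hypothetical example of two basic agentic-AI guardrails:
# (1) restrict which datasets an agent may access, and
# (2) require human approval before high-impact actions run.
from dataclasses import dataclass, field

APPROVAL_REQUIRED = {"delete_records", "send_external_email", "change_permissions"}
DATASET_ALLOWLIST = {"public_docs", "marketing_assets"}   # hypothetical dataset names

@dataclass
class AgentAction:
    name: str
    datasets: set[str] = field(default_factory=set)
    approved_by: str | None = None   # set by a human reviewer

def authorise(action: AgentAction) -> bool:
    """Allow the action only if it touches permitted data and, where needed, has sign-off."""
    if not action.datasets <= DATASET_ALLOWLIST:
        return False        # agent tried to reach a dataset outside the allowlist
    if action.name in APPROVAL_REQUIRED and not action.approved_by:
        return False        # high-impact action lacks human approval
    return True

# A summarisation task over allowed data passes; a bulk delete without sign-off does not.
print(authorise(AgentAction("summarise", {"public_docs"})))                          # True
print(authorise(AgentAction("delete_records", {"public_docs"})))                     # False
print(authorise(AgentAction("delete_records", {"public_docs"}, approved_by="dpo")))  # True
```

Real deployments would layer audit logging, identity checks and data classification on top, but the pattern of gating agent actions on both data scope and human approval reflects the questions security leaders are reportedly asking.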

Investment response

Despite the setbacks, the study shows that many organisations are responding with new investments. These focus on governance tooling, security controls and workforce skills.

64.4% of respondents said they are increasing investment in AI governance tools. 54.5% are adding more funding for data security tools.

Most organisations are also working on AI skills. 99.5% said they are implementing some form of AI literacy intervention.

Role-based training is proving popular. 79.4% of respondents ranked this type of training as highly impactful.

Organisations are also examining the impact of AI initiatives in more structured ways. 73.9% said they use both quantitative and qualitative feedback methods to assess AI programme effectiveness.

Chris Shaw, Channel Director, UKI & SA at AvePoint, said the current AI debate often overlooks specific, solvable issues.

"We're seeing more pessimism about AI these days, but these criticisms tend to be sensationalised and short on specifics. This report is helpful because it pinpoints the challenges that many are experiencing, and the good news for all of us is that these are surmountable problems. Organisations that take common sense steps tackle challenges like AI-related security incidents-which 75% of orgranisations are currently experiencing-will come out on top, and those that fall victim to vague, sensationalised pessimism risk falling behind. This means there's a huge opportunity for channel organisations that act proactively, today, to solve these problems for end customers," said Shaw.