SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

ShareGate survey says AI exposed data at 29% of firms

Wed, 22nd Apr 2026

ShareGate has published survey findings showing that 29% of organisations have experienced AI-driven data exposure incidents.

The research covered more than 850 IT and security leaders across the US, Canada and Europe.

The study highlights a gap between confidence in governance and reported problems after the adoption of Microsoft 365 AI tools, including Copilot. While 93% of respondents said they were confident their Microsoft 365 governance framework could support AI responsibly, only 51% had completed an organisation-wide governance review after enabling those tools.

Among organisations that reported sensitive information being surfaced by AI systems, the exposed material included customer records, sensitive internal documents, personal data, HR records, financial data and proprietary intellectual property. Customer records were cited most often, at 36%, followed by sensitive internal documents at 31%. Personal data and personally identifiable information, along with HR records, were each reported by 30% of respondents, while financial data stood at 25% and proprietary IP at 21%.

The findings also indicate that AI deployments are increasing operational pressure on IT and security teams. More than 70% of respondents said their governance burden had increased since they enabled AI tools, and nearly eight in 10 were at least moderately concerned about AI accessing content whose permissions had not been reviewed recently.

Governance gap

The results suggest many organisations are moving ahead with AI roll-outs while governance processes remain incomplete. More than three-quarters of respondents said governance work such as permission audits, clean-up and lifecycle management had at least a moderate effect on their confidence in AI investments.

The survey also found that AI-related spending is already taking up a notable share of IT budgets. More than 80% of respondents said they expected measurable return on investment from Microsoft 365 AI initiatives within 18 months.

That expectation appears to be tied to concerns about data controls and internal readiness. The research found that eight in 10 organisations said they were likely to bring in an external partner for an AI governance assessment before expanding their AI use further.

Benjamin Niaulin, Vice President of Product at ShareGate, said the data showed that existing governance weaknesses were becoming harder to ignore as AI tools reached deeper into corporate information stores.

"AI and Copilot didn't create the governance problem. They exposed it," Niaulin said.

"IT teams have been papering over fragmented tools and blind spots for years. Now every oversharing group and forgotten permission is one Copilot prompt away from becoming a real incident. You can't govern what you can't see, and right now, most teams can't see it."

Regional sample

Centiment conducted the survey on behalf of ShareGate in March 2026. It included IT and security leaders in the United States, the United Kingdom, Canada, France, Germany, the Netherlands and Ireland, covering roles in IT leadership, security leadership, data governance, compliance and digital workplace leadership.

The research adds to a wider debate over whether corporate data management practices are keeping pace with generative AI adoption in workplace software. Microsoft 365 environments often contain large volumes of documents, messages and records with complex permission structures, making oversight difficult when AI assistants can draw on those stores to answer prompts or generate summaries.

For many organisations, the issue is less about whether to deploy AI tools than whether internal data controls are strong enough to limit what those systems can surface. The survey indicates that concern is moving beyond theory, with reported incidents already affecting a sizeable minority of respondents.

The numbers also highlight a tension between self-reported confidence and operational reality. A strong majority believe their governance framework can support responsible AI use, yet far fewer have carried out a full review since enabling Microsoft 365 AI functions. That mismatch may help explain why outside support is emerging as an option for teams that lack the time or visibility to review permissions, clean up old content and manage access rights at scale.

For suppliers in the Microsoft 365 management market, that creates an opening around governance and oversight rather than AI deployment itself. For customers, the survey suggests that checking permissions, reviewing content exposure and updating lifecycle rules are becoming part of the cost of scaling workplace AI.