Exclusive: ManageEngine’s Vinayak Sreedhar highlights AI adoption challenges
Australian and New Zealand businesses are racing to adopt artificial intelligence, but many are struggling to manage its human and organisational impacts, according to new data from ManageEngine's Navigating AI Anxiety: A/NZ Organisations in 2025 report.
The study, based on responses from 300 ICT professionals across Australia and New Zealand, shows that while 93% of organisations have adopted AI (61% at a company-wide level), serious concerns remain.
More than half of respondents (57%) feel anxious about integrating AI, and 97% say their organisation lacks some form of AI-related skill, particularly in areas such as machine learning, AI governance, and model training.
Vinayak Sreedhar, Country Head at ManageEngine for ANZ, said the results reflect a growing tension between AI enthusiasm and readiness. "It's a very startling kind of data," he said. "There's a high level of adoption, yet a lot of apprehension. This primarily comes down to a skills issue."
The skill gap behind AI anxiety
The report reveals that while 63% of respondents feel they cannot afford to ignore AI, only two-thirds believe their AI leaders are truly up to speed. Sreedhar said this disconnect is fuelling workplace stress and uncertainty.
"There seems to be inadequate AI governance and training from the employer's perspective," he said. "Many organisations today lack formal training sessions and clear policies."
This anxiety is already taking its toll. The report found that 59% of respondents frequently feel stressed about keeping up with AI changes, 34% feel their job security has decreased, and 31% report experiencing more anxiety or burnout at work.
To combat this, Sreedhar emphasised the need for frequent and inclusive training. "One or two sessions won't cut it," he said. "Training has to happen in a continuous loop, with clear milestones and constant communication."
Organisations are responding, with many turning to practical, hands-on strategies. According to the report, 41% are relying on on-the-job AI learning, 38% on mentorship and coaching, and 37% on in-house workshops. "You need to keep briefing your internal teams, hear out their concerns, and address them along the way," Sreedhar said.
Involving employees and building trust
Sreedhar stressed that employee involvement is key to a successful AI rollout. "This cannot be a siloed project driven by a few folks," he said. "You need to engage functional leaders across departments and maintain transparency about what you're trying to achieve."
Failing to include employees in the process, he warned, is a "recipe for failure." Resistance, disengagement, and even project collapse are real risks when staff are not on board. "The success of any product depends on how well it is adopted. If there's resistance from the larger workforce, we don't see such products being successful."
Organisations are trying to bridge this gap. The report shows that 39% are now using AI to support, rather than replace, human roles, while 36% are promoting continuous learning and adaptability. Another 35% are actively providing training to boost AI-related skills.
Despite the challenges, 67% of employees reportedly trust the output of the AI tools their organisation uses, and 75% believe the strategies already implemented are helping staff incorporate AI into their workflows.
Ethics, cybersecurity and the risks of rushing
Sreedhar also highlighted the dangers of rushing into AI adoption without proper frameworks. "A poorly implemented AI system is potentially a cybersecurity nightmare," he said. "You have to be extremely vigilant."
The report reveals 43% of respondents are concerned about the lack of a clear plan to manage AI's human impact, and 42% worry about the potential for misuse by malicious actors. Half of respondents say their organisation's AI governance includes data privacy and security controls, but 40% of SMBs admit they lack the ability to monitor employees' use of BYO AI tools.
From a cybersecurity standpoint, the skills gap can further expose businesses to threats. "If employees are not trained, they may unknowingly compromise data integrity," Sreedhar said. "Ignorance leads to issues."
Ethical leadership is also essential. "Leadership must lead by example," Sreedhar explained. "They need to set standards around fairness, transparency and accountability."
A comprehensive AI strategy, Sreedhar said, must include alignment with business goals, strong data governance, and continuous performance evaluation.
"Any rushed decision-making will lead to regret somewhere down the line," he said. "You have to take your time."