A recent Zscaler survey, 'All eyes on securing GenAI', reveals the accelerating use and growing security implications of generative AI (GenAI) tools within Australian and New Zealand (ANZ) organisations. A striking 97% of respondents in the ANZ region reported using GenAI tools, yet 85% view these technologies as potential security risks.
Despite the perceived risks of GenAI, Australian and New Zealand organisations lead the way in implementing precautionary measures, with 85% having put AI security mechanisms in place. This stands in stark contrast to the roughly two-thirds of organisations globally that have introduced similar safeguards, with a further 31% planning to do so. Advanced GenAI tools such as ChatGPT, while posing potential security threats, offer organisations significant efficiency gains and innovative solutions.
Interestingly, IT teams, not general employees, are the primary drivers behind the implementation of GenAI tools. Heng Mok, Chief Information Security Officer, Asia Pacific and Japan at Zscaler, notes, "Despite mainstream awareness, it is not employees who appear to be driving the interest and usage; only 12% of respondents in Australia and New Zealand said it stemmed from employees. Instead, 51% said usage was being driven by the IT teams directly." He adds that this should reassure business leaders, as it demonstrates that the deployment of artificial intelligence tools is overseen by IT specialists, helping ensure data and customer security.
Mok also emphasised the need for businesses to strengthen their GenAI policies to keep pace with the rapidly evolving landscape of GenAI tools and prevent possible cyberattacks. "With the fast-paced nature of GenAI, it is essential that businesses continue to prioritise educating employees and implementing security measures in response to rapidly changing technologies," he said.
Furthermore, given increasing interest in GenAI, with 45% of respondents in Australia and New Zealand expecting usage to rise significantly by the end of the year, organisations are encouraged to expedite the drafting of GenAI acceptable-use policies and to implement zero-trust architecture that authorises only approved AI applications and users. Carrying out comprehensive security risk assessments for new AI applications is also paramount, along with establishing detailed logging to track AI activity and enabling zero trust-powered data loss prevention (DLP) measures to protect against data exfiltration.
The Zscaler research surveyed 901 IT decision makers across 10 markets, including Australia and New Zealand, and offers significant insights into the proper use and security management of evolving GenAI tools.