
Deepfakes & AI scams prompt urgent call for workplace vigilance

Mon, 25th Aug 2025

Experts have highlighted the growing risks associated with deepfakes and AI-powered scams as part of National Scams Awareness Week.

With rapid advancements in artificial intelligence, both organisations and individuals face a more complex threat landscape, especially as deepfakes and AI-generated impersonations become increasingly mainstream.

Evolving threats

Ashley Diffey, Vice President Australia and New Zealand at Ping Identity, said that the ability of artificial intelligence to generate deceptive content has changed how threats are perpetrated online and made it more difficult to know what information can be trusted.

As artificial intelligence rapidly evolves, so too does the threat landscape. Deepfakes and AI-generated impersonations have become mainstream for bad actors, making us question everything we see, hear or interact with online. This Scam Awareness Week, when trust can only come from what can be verified, businesses that infuse verification into every step of the identity journey, from onboarding, to access permissions, and even liveness detection, will be the ones that earn customer trust long term.

Diffey's remarks come at a time when businesses are under pressure to enhance their security practices to protect consumers and staff from scams that are harder to detect due to digital impersonation technologies.
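
Diffey's idea of infusing verification into every step of the identity journey can be pictured as layering independent checks, onboarding, access permissions and liveness, and only extending trust once all of them pass. The short Python sketch below is one way to picture that layering; the function names, checks and threshold are illustrative assumptions for this article, not Ping Identity's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class IdentitySession:
    """Hypothetical record of which verification layers a user has passed."""
    user_id: str
    checks_passed: set = field(default_factory=set)

def verify_onboarding_document(session: IdentitySession, document_ok: bool) -> None:
    # Onboarding layer: e.g. a government ID matched to the registered name.
    if document_ok:
        session.checks_passed.add("onboarding")

def verify_access_permission(session: IdentitySession, resource: str, granted_scopes: set) -> None:
    # Access layer: the requested resource must sit inside the user's granted scope.
    if resource in granted_scopes:
        session.checks_passed.add("access")

def verify_liveness(session: IdentitySession, liveness_score: float, threshold: float = 0.9) -> None:
    # Liveness layer: a score from a camera challenge, aimed at replayed or generated faces.
    if liveness_score >= threshold:
        session.checks_passed.add("liveness")

def is_trusted(session: IdentitySession) -> bool:
    # Trust only follows when every layer has independently passed.
    return {"onboarding", "access", "liveness"} <= session.checks_passed

session = IdentitySession(user_id="jane.doe")
verify_onboarding_document(session, document_ok=True)
verify_access_permission(session, resource="payroll", granted_scopes={"payroll", "hr"})
verify_liveness(session, liveness_score=0.95)
print(is_trusted(session))  # True only because all three layers passed
```

The point of the sketch is that no single check is decisive on its own; removing any one layer leaves the session untrusted.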

Deepfakes and the workplace

Les Williamson, Regional Director Australia and New Zealand at Check Point Software Technologies, noted that the increasing quality and availability of AI tools is making deepfakes more accessible to cybercriminals.

Deepfakes have surged into commercial and consumer consciousness alike owing to their growing sophistication. The ability to mimic a human at high quality is now much more possible than ever before. That's because access to the AI tools used to create deepfakes is better and this - along with low cost barriers to entry - means that convincing fakes can be deployed at scale. The significant business impacts associated with malicious deployment of deepfakes have people and businesses asking what they can do to protect themselves and their operations. How can they work out whether the person on the other end of a videoconference is real and not an AI creation?

Williamson explained that as deepfakes impact both commercial and consumer settings, more vigilance and practical steps are needed, particularly in professional environments where videoconferencing is common.

He detailed the necessity for employees to adopt habits similar to those developed for email safety when participating in video calls or remote meetings.

In order to avoid scams in the workplace, enterprises need people to be vigilant and to perform some common-sense checks. In all situations people tend to weigh up what they're seeing and make certain risk assessments and judgements. In the same way people currently check the veracity of an email or its contents - cross-checking the sender ID, hovering over a URL or attached file, examining the style and grammar - they can benefit from applying the same type of approach to videoconferencing engagements today. This triangulation of clues and risk factors is a kind of "multi-factor authentication" that we now need to perform more consciously in workplace settings.
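
Williamson's "triangulation of clues" amounts to weighing several weak signals together rather than trusting any single one. The Python sketch below is a hypothetical illustration of that habit applied to a meeting invitation; the specific checks, weights and escalation threshold are assumptions made for illustration, not a Check Point feature.

```python
import re

def invitation_risk_score(sender: str, expected_domain: str, links: list, body: str) -> int:
    """Accumulate simple risk signals for a meeting invite, mirroring the
    manual checks people already apply to suspicious email."""
    score = 0

    # 1. Cross-check the sender ID against the domain we expect.
    if not sender.lower().endswith("@" + expected_domain):
        score += 2

    # 2. "Hover over" each link: flag hosts that don't match the expected domain.
    for url in links:
        match = re.search(r"https?://([^/]+)", url)
        if match and expected_domain not in match.group(1).lower():
            score += 2

    # 3. Examine style and tone: crude cues such as pressure and urgency phrases.
    urgent_phrases = ("urgent", "immediately", "verify your account", "act now")
    score += sum(1 for phrase in urgent_phrases if phrase in body.lower())

    return score

invite = {
    "sender": "ceo@examp1e-corp.com",
    "links": ["https://meeting-login.example-phish.net/join"],
    "body": "Urgent: join this call immediately to verify your account.",
}
score = invitation_risk_score(invite["sender"], "example.com", invite["links"], invite["body"])
print("Escalate to IT" if score >= 3 else "Looks plausible, stay alert")
```

As with the manual checks Williamson describes, any one signal might be innocent; it is the accumulation of mismatched sender, off-domain links and pressure language that justifies escalating before joining the call.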

Supporting vigilance

Williamson also pointed to the ongoing need for both individual caution and institutional support in keeping ahead of AI-related threats.

Employees should continue to be cautious and stay current with the evolution of AI technology to deal with the threat of encountering a deepfake. Their efforts can be ably supported by organisations implementing cyber security solutions, including robust email protections that can detect and prevent many malicious meeting invitations from being delivered to inboxes in the first place. Given the potential cost of the threat, it's important to have well-rounded protections in place.

Calls for up-to-date training, cyber security solutions, and the verification of user identities coincide with the increased sophistication of scams facilitated by AI, reinforcing the advice to maintain vigilance in both personal and workplace settings.
