isVerified launches AI voice deepfake defence for execs
Cybersecurity company isVerified has emerged from stealth and begun speaking publicly about its technology for detecting AI-generated voice impersonation in executive communications.
The Tel Aviv-based company said it has spent the past year developing a platform focused on vishing, in which attackers use synthetic or cloned voices to impersonate senior executives and other trusted figures. The company said these attacks increasingly target enterprise leadership.
Voice cloning tools have become more accessible during the past two years, driven by advances in generative AI. isVerified said attackers can now replicate speech patterns, tone, cadence and accent with minimal source material. The company cited public recordings, intercepted calls and short voice fragments as potential inputs.
Security teams have long concentrated on controls for email, endpoints, networks and identity systems. isVerified said vishing attacks often bypass these defences because they target human decision-making rather than technical vulnerabilities. It said organisations have reported incidents where fake calls from executives led to wire transfer approvals, overrides of internal controls, staff manipulation, or the extraction of sensitive information.
One recent example involved attempts to impersonate senior US government officials. isVerified pointed to vishing campaigns in which threat actors used voices presented as those of the White House Chief of Staff and the US Secretary of State. The calls targeted congressional representatives, governors and senior state officials. The company said recipients believed the calls were legitimate because of voice accuracy and contextual knowledge.

The team at isVerified said this environment exposes a gap in enterprise security. Many security systems do not authenticate who is speaking in real time. They also do not assess whether a voice has been generated or manipulated.
"Voice is now the most vulnerable critical attack surface," said Roi Carthy, founder and CEO of isVerified.ai.
The firm said its platform validates speaker authenticity during sensitive inbound and outbound communications. It also detects anomalies consistent with AI-generated or manipulated voice content. The company said the system runs in the background and requires minimal interaction from executives.
isVerified positioned the product against consumer voice biometrics, call-recording tools and awareness training programmes. It said executive environments place different demands on authentication technology, describing those settings as time-sensitive, high-stakes and with limited tolerance for workflow disruption.
Technology focus
The company described several elements in its platform. It said it provides real-time one-to-one voice authentication for sensitive inbound and outbound communications. It also said it uses proprietary detection methods for synthetic and manipulated speech. isVerified said the approach looks for indicators associated with AI-generated voice content, rather than relying only on static voice matching.
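To illustrate the distinction the company draws, conventional "static voice matching" in speaker-verification systems typically compares a fixed enrolled voiceprint embedding against an embedding of the incoming audio, often via cosine similarity against a threshold. The sketch below is a generic, minimal illustration of that baseline technique and is not isVerified's method; the embeddings, dimensions and threshold are entirely hypothetical (real systems derive high-dimensional embeddings from a trained speaker-encoder model).

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def matches_enrolled_voice(enrolled: list[float],
                           sample: list[float],
                           threshold: float = 0.85) -> bool:
    """Static match: accept the caller if the sample embedding is
    close enough to the enrolled voiceprint. Threshold is illustrative."""
    return cosine_similarity(enrolled, sample) >= threshold

# Toy 3-dimensional embeddings for illustration only.
enrolled = [0.90, 0.10, 0.40]   # hypothetical enrolled executive voiceprint
genuine  = [0.88, 0.12, 0.41]   # sample close to the enrolled voice
other    = [0.20, 0.90, 0.10]   # sample from a different voice

print(matches_enrolled_voice(enrolled, genuine))  # True
print(matches_enrolled_voice(enrolled, other))    # False
```

The limitation this exposes is the one the article alludes to: a sufficiently faithful AI clone of the target's voice would produce an embedding close to the enrolled voiceprint and pass such a check, which is why approaches that also look for indicators of synthetic generation, rather than relying only on matching, are positioned as necessary.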
The company said it has designed the system around minimal executive interaction. It also said it targets enterprise and public sector deployments in regulated and high-risk organisations with complex governance and operational requirements.
Carthy said the company deliberately remained quiet while both the technology and the threat landscape matured, and described the threat as already operational at scale rather than experimental.
The company said early deployments focused on communications involving the C-suite, boards, legal functions and finance. It described these areas as higher risk due to the authority levels involved and the potential for urgent decisions based on verbal confirmation.
Commercial rollout
isVerified said it validated the approach with users during its stealth period. It said it refined deployment models and focused on reducing the need for behavioural change among executives, which the company said mattered for adoption and effectiveness.
With its public launch, isVerified said it will expand commercial availability. The company said it will pursue both direct sales and channel-led distribution, targeting enterprises and public-sector organisations across North America, the UK and Europe, and the APAC and ANZ regions.
"We deliberately stayed in stealth while both the technology and the threat landscape matured," said Carthy. "Voice deepfakes are no longer experimental curiosities. They are reliable, scalable, and increasingly convincing. Waiting allowed us to engineer a system that works in real-world executive conditions - noisy environments, imperfect connections, time pressure - not a controlled lab."