SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers

Vercel breach linked to compromised Context.ai integration

Wed, 22nd Apr 2026

Vercel has disclosed a security incident involving unauthorised access to some internal systems, linking the intrusion to a compromise at third-party AI provider Context.ai.

The attack exposed a chain of dependencies between a small external AI tool and Vercel's core corporate systems. It began when an attacker compromised Context.ai, which a Vercel employee had used with their enterprise Google Workspace account.

According to Vercel's security bulletin, the attacker used access to the Context.ai integration to take over the employee's Google Workspace identity. The intruder then accessed some Vercel environments and environment variables that were not marked as "sensitive".

Vercel says environment variables flagged as "sensitive" are stored so they cannot be read in plain text, and it currently has no evidence that those values were accessed. It has engaged Mandiant, other cybersecurity firms, industry peers and law enforcement, and has contacted Context.ai as part of the wider investigation.

The incident affected a limited subset of Vercel customers whose credentials may have been exposed. Vercel has contacted those customers directly and recommended they immediately rotate credentials and other secrets that may have been stored in non-sensitive environment variables.

Vercel continues to investigate whether any data was exfiltrated and says it will contact additional customers if it uncovers further evidence of compromise. It has also published an indicator of compromise for the Google Workspace OAuth application associated with the Context.ai tool and recommended that Google Workspace administrators check for its presence.
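For administrators acting on that advice, the check amounts to scanning each user's authorised OAuth grants for the flagged client ID. A minimal sketch, using record shapes modelled on the Google Workspace Admin SDK Directory API `tokens.list` response and a placeholder client ID (the real indicator is in Vercel's bulletin, not reproduced here):

```python
# Hypothetical IoC value for illustration only; substitute the OAuth
# client ID published in Vercel's security bulletin.
FLAGGED_CLIENT_IDS = {"000000000000-contextai-example.apps.googleusercontent.com"}

def find_flagged_grants(tokens):
    """Return the token grants whose OAuth client ID matches a known IoC.

    `tokens` is a list of dicts shaped like Admin SDK Directory API
    `tokens.list` items: clientId, displayText, scopes, userKey.
    """
    return [t for t in tokens if t.get("clientId") in FLAGGED_CLIENT_IDS]

# Example grant records as an admin might export them per user:
sample = [
    {"clientId": "000000000000-contextai-example.apps.googleusercontent.com",
     "displayText": "Context.ai", "scopes": ["https://mail.google.com/"],
     "userKey": "alice@example.com"},
    {"clientId": "111111111111-crm.apps.googleusercontent.com",
     "displayText": "CRM Sync", "scopes": ["openid"],
     "userKey": "bob@example.com"},
]

hits = find_flagged_grants(sample)
for t in hits:
    print(f"{t['userKey']} granted access to {t['displayText']} ({t['clientId']})")
```

In a real deployment the grant records would come from the Admin SDK (iterating `tokens.list` over each user) rather than a hardcoded list; any match should be revoked and the affected account investigated.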

The breach has intensified scrutiny of how software and AI supply chains rely on OAuth tokens and application integrations that often fall outside traditional security monitoring. Security specialists say the Vercel case shows how an obscure third-party app can become a stepping stone into a much larger corporate environment.

Cory Michal, chief information security officer at AppOmni, said the incident fits a pattern security teams increasingly recognise in cloud and SaaS environments.

"What's most noteworthy about this attack is that it appears to have started as a SaaS integration supply-chain compromise and then cascaded into the takeover of a trusted Vercel user and access to internal systems."

"According to Vercel, the attacker first compromised a third-party AI tool, then used that access to take over an employee's Google Workspace account and pivot into Vercel environments. That reflects a growing attacker playbook: abusing trusted SaaS integrations and identity connections to move from one app into a much larger enterprise environment."

"The bigger issue is the growing risk posed by OAuth tokens and the often invisible web of third-party SaaS integrations connected to core business platforms. Once a user authorises one app, that trust can extend into email, identity, CRM, development and other systems in ways many organisations do not fully inventory or monitor, making a single compromised integration a powerful pivot point."

"That risk is no longer theoretical. Vercel says this incident began with a compromised third-party AI tool, and Google Threat Intelligence has separately warned about widespread campaigns abusing stolen OAuth tokens tied to third-party SaaS integrations to access downstream environments and harvest sensitive data. That underscores how often this attack path is now being exploited."

"The key lesson is that third-party risk management cannot stop at reviewing a vendor's SOC 2 report or penetration test results. Organisations need continuous visibility into how third-party applications are connected across their SaaS estate, what OAuth grants and integration tokens they hold, and how those relationships could be abused if one provider is compromised."

"Just as important, companies need strong log collection and analysis across these platforms so they can detect suspicious activity quickly and understand how an attacker may be moving through interconnected SaaS environments," Michal said.

Vercel says its services remain operational and that it has deployed additional protective measures and monitoring across its infrastructure. It has advised customers to review account activity logs, investigate recent deployments and rotate any secrets stored in non-sensitive environment variables.
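For customers triaging that advice, the first step is listing which environment variables were stored in readable form. A minimal sketch, assuming records shaped like Vercel's project environment-variable REST API (the field names and `type` values here are assumptions based on that public API, not taken from the bulletin):

```python
# Hypothetical env-var records in the shape of Vercel's project API;
# "type" values such as "plain", "encrypted" and "sensitive" are
# assumed from the public REST API documentation.
env_vars = [
    {"key": "DATABASE_URL", "type": "encrypted", "target": ["production"]},
    {"key": "STRIPE_SECRET_KEY", "type": "sensitive", "target": ["production"]},
    {"key": "ANALYTICS_ID", "type": "plain", "target": ["preview", "production"]},
]

def rotation_candidates(env_vars):
    """Anything not stored as 'sensitive' was potentially readable in
    plain text, so its value should be rotated per Vercel's guidance."""
    return [v["key"] for v in env_vars if v["type"] != "sensitive"]

print(rotation_candidates(env_vars))
```

The rotation itself happens on the upstream service that issued each secret (database, payment provider, and so on), after which the new value is written back to the project, for example via `vercel env rm` and `vercel env add`.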

Security researchers say the case also highlights how the rapid adoption of AI tools in development, sales and support workflows can create new attack paths that bypass multi-factor authentication and conventional login monitoring. OAuth-based connections between corporate accounts and AI software often persist in the background long after initial setup while remaining largely invisible to central security teams.

Yagub Rahimov, chief executive officer at Polygraf AI, said the Vercel incident showed how much impact a single overlooked integration can have on a wider enterprise environment.

"One employee. One AI app. 'Allow All.' That's how Vercel got breached."

"The employee signed up for Context AI's app using their enterprise account and granted broad Google Workspace permissions. When that OAuth token was stolen, the attacker did not need credentials or need to bypass MFA. They simply used a valid token exactly as it was permitted to be used."

"The Salesloft-Drift breach in late 2025 worked the same way: attackers stole OAuth tokens from an integration provider and used trusted connections to move straight into hundreds of customer environments without triggering a single login alert."

"The technical problem is that OAuth tokens granted to third-party apps fall outside most organisations' detection scope. They do not appear in login logs. They do not trigger MFA prompts. Context AI was compromised a month before anyone at Vercel knew there was a problem, and CrowdStrike apparently did not flag the OAuth tokens as part of their investigation scope."

"The token just kept working silently, with whatever permissions the employee gave it on day one. It's the same problem we see all the time at Polygraf AI: AI tools quietly holding OAuth access to corporate accounts that nobody is watching. The breach surface is not your perimeter anymore. It's every OAuth grant your employees ever clicked through," Rahimov said.