Vercel recently disclosed a security breach in which attackers gained unauthorized access to internal systems. The incident combined an employee's adoption of an AI tool with a subsequent infostealer attack on that AI vendor, exposing an unreviewed OAuth grant path into Vercel's production environments.
Vercel confirmed the breach on Sunday and brought in Mandiant and law enforcement, whose investigations are ongoing. The company also worked with GitHub, Microsoft, npm, and Socket to audit its npm packages, and the audit determined that none had been compromised.
The entry point was Context.ai: a Vercel employee had installed its browser extension and granted it broad OAuth permissions from a corporate Google Workspace account. When Context.ai was breached, the attacker took over the employee's Workspace account and used it to escalate privileges within Vercel's environments, initially by accessing non-sensitive environment variables.
Vercel's CEO, Guillermo Rauch, described the attacker as "highly sophisticated" and believed the operation had been accelerated by AI. The intrusion also involved a second OAuth grant, tied to Context.ai's Chrome extension, which gave access to users' Google Drive files.
Further investigation traced the breach to a Lumma Stealer infection on an employee's machine at Context.ai, which compromised a range of credentials, including Google Workspace logins and Supabase keys, among others. Context.ai confirmed that the breach affected its consumer AI Office Suite product, not its enterprise Bedrock offering.
The breach highlighted several governance failures: unaudited OAuth scopes for AI tools, inadequate classification of environment variables, and a lack of detection coverage for infostealer-to-supply-chain escalation chains. The extended dwell time between vendor detection and customer notification also drew concern.
In response to the breach, Vercel recommended that security directors inventory AI tool OAuth grants, make environment variables non-readable by default, and cut detection-to-containment SLAs to below the average eCrime breakout time.
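The OAuth inventory step above can be sketched in a few lines. This is a minimal illustration, not Vercel's tooling: it assumes you have already exported your tenant's OAuth token grants (for example, from the Google Workspace admin tooling) into a list of records with `user`, `displayText`, and `scopes` fields, and it flags any grant that includes a broad scope. The scope list and record shape are assumptions for the example.

```python
# Triage an exported list of Workspace OAuth token grants and flag broad
# scopes. The export format and the "broad" scope list are assumptions,
# illustrative only -- adjust to your own inventory and risk criteria.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/admin.directory.user",
}

def flag_broad_grants(grants):
    """Return (user, app, risky_scopes) for grants that include a broad scope."""
    flagged = []
    for grant in grants:
        risky = BROAD_SCOPES.intersection(grant["scopes"])
        if risky:
            flagged.append((grant["user"], grant["displayText"], sorted(risky)))
    return flagged

# Hypothetical sample export: one risky AI-extension grant, one benign grant.
sample = [
    {"user": "dev@example.com", "displayText": "SomeAIExtension",
     "scopes": ["https://www.googleapis.com/auth/drive", "openid"]},
    {"user": "pm@example.com", "displayText": "Calendar Helper",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]
for user, app, scopes in flag_broad_grants(sample):
    print(f"{user}: {app} -> {scopes}")
```

In practice the same filter can feed a periodic review: any flagged grant gets a human decision to keep, narrow, or revoke it.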
Security directors were also advised to run IoC checks for the specific OAuth app IDs associated with Context.ai to determine whether their environments were affected. The breach underscores the risks of AI agent OAuth integrations and the importance of rigorous security review before such tools are granted access.
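The IoC check itself reduces to a set comparison between the OAuth client IDs granted in your tenant and the published indicators. The IDs below are placeholders, since the actual Context.ai app IDs are not reproduced here; substitute the published indicators before use.

```python
# Cross-reference granted OAuth client IDs against an IoC list.
# Both IDs below are hypothetical placeholders, NOT real indicators --
# replace them with the app IDs published for the Context.ai breach.
IOC_APP_IDS = {
    "000000000000-placeholder-a.apps.googleusercontent.com",  # hypothetical
    "000000000000-placeholder-b.apps.googleusercontent.com",  # hypothetical
}

def find_matches(granted_client_ids):
    """Return granted client IDs that appear in the IoC list, sorted."""
    return sorted(IOC_APP_IDS.intersection(granted_client_ids))

# Hypothetical set of client IDs with active grants in a tenant.
granted = {
    "123456789-example.apps.googleusercontent.com",
    "000000000000-placeholder-a.apps.googleusercontent.com",
}
hits = find_matches(granted)
print("affected:" if hits else "no IoC matches", hits)
```

Any match warrants revoking the grant, rotating the affected user's credentials, and reviewing what the token could have accessed.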