Yesterday, Vercel — the cloud platform that hosts a significant chunk of the web — confirmed a security breach. Customer API keys, source code, and credentials for services like Supabase, Datadog, and AuthKit were stolen and are now reportedly being sold for $2 million.
Here's the part worth sitting with: this didn't happen because Vercel wrote bad code or missed a patch. It happened because one employee, at some point, signed up for a third-party AI productivity tool using their corporate Google account and clicked "Allow All" on the OAuth permissions prompt.
You've done this. Your team has done this. Probably multiple times in the last six months.
What Actually Happened
The tool at the center of this is Context.ai — an AI office suite that plugs into your Google Workspace and helps with drafting, summarizing, and organizing. Useful-sounding product. Reasonable thing for a busy employee to try.
But Context.ai got compromised. Not through a sophisticated zero-day — through one of their own employees downloading Roblox exploit scripts on what was probably their personal machine. Those scripts were laced with Lumma Stealer, a credential-harvesting malware. From there, the attacker worked their way into Context.ai's AWS environment and pulled the OAuth tokens that Context.ai held for its users.
Including the one belonging to the Vercel employee who had connected their corporate Google account.
With that OAuth token, the attacker now had access to that employee's Google Workspace. And from inside Google Workspace, they could reach Vercel's internal systems and the environment variables — API keys, database URLs, service credentials — stored there.
The chain looked like this: a stranger's home computer, to an AI startup's AWS environment, to a Vercel employee's Google account, to Vercel's production infrastructure, to the credentials your apps depend on. Every link held because someone, somewhere, extended trust to a tool they never vetted.
ShinyHunters, the threat group claiming responsibility, is now selling the data on BreachForums at a $2 million asking price.
Why This Is Different From the AI Security Stories You've Already Read
Most AI security coverage focuses on what's in your code — leaked API keys in GitHub, backdoored Python packages, prompt injection in agents. Important stuff. But this attack didn't touch code at all.
It exploited the authorization chains that AI productivity tools have quietly been building across the business world. Every time someone on your team signs into an AI tool with their Google or Microsoft account and approves a list of permissions, that tool can act on their behalf from then on. If the tool's vendor gets compromised, or even if a single employee at that vendor gets phished, the attacker inherits that access.
The "Allow All" button on an OAuth consent screen feels like a one-time thing. It's actually an ongoing grant of access that stays live until someone explicitly revokes it.
Count Your Exposure
Go right now to your Google Workspace admin console: Security → Access and data control → API controls → Manage Third-Party App Access.
You'll see a list of every app that has OAuth access to your organization's accounts. In most small organizations that have been active with AI tools in the last year, this list has somewhere between 15 and 40 entries. Many of them are tools people tried once. Some hold permissions like "see and modify all files in Drive" or "send email on behalf of users."
Each one of those entries represents a vendor whose security posture you implicitly trusted the moment someone clicked "Allow." Most of them you've never assessed. For most, you couldn't describe their incident response process if someone asked.
That's the actual attack surface. Not your code. Your OAuth list.
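If you want to run that audit programmatically — or diff the list over time — the same data is available through the Admin SDK Directory API. Here's a minimal TypeScript sketch, assuming a service account with domain-wide delegation impersonating an admin; the key file path and admin address are placeholders, and pagination is omitted for brevity.

```typescript
// audit-oauth-grants.ts
// Lists every third-party OAuth grant across the domain.
// Assumes: a service account with domain-wide delegation, granted the
// admin.directory.user.readonly and admin.directory.user.security scopes.
// npm install googleapis
import { google } from "googleapis";

const auth = new google.auth.GoogleAuth({
  keyFile: "service-account.json", // placeholder path
  scopes: [
    "https://www.googleapis.com/auth/admin.directory.user.readonly",
    "https://www.googleapis.com/auth/admin.directory.user.security",
  ],
  clientOptions: { subject: "admin@example.com" }, // placeholder admin to impersonate
});

const admin = google.admin({ version: "directory_v1", auth });

async function auditGrants() {
  // First page of users only; real code would follow nextPageToken.
  const { data: userList } = await admin.users.list({
    customer: "my_customer",
    maxResults: 500,
  });

  for (const user of userList.users ?? []) {
    const email = user.primaryEmail!;
    // All OAuth tokens this user has granted to third-party apps.
    const { data } = await admin.tokens.list({ userKey: email });
    for (const token of data.items ?? []) {
      console.log(`${email} -> ${token.displayText}: ${token.scopes?.join(", ")}`);
    }
  }
}

auditGrants().catch(console.error);
```

Grep the output for scopes containing `gmail`, `drive`, or `admin` and you have a rough severity ranking of your list.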
What to Do This Week
Audit and prune your OAuth apps. The Google Workspace admin audit takes about twenty minutes. Look for apps your team no longer uses and revoke them. Look for apps with broader permissions than they need and see if there's a more restricted option. Anything with access to email, Drive, Calendar, or admin functions that you can't immediately justify — cut it.
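Revocation can be scripted too. Here's a sketch that builds on the audit script above, cutting off a single app for every user who granted it; the app is identified by its OAuth client ID, which shows up in the tokens output and in the admin console entry.

```typescript
// revoke-app.ts
// Revokes one app's OAuth grant for every user in the domain.
// Reuses the `admin` client and auth setup from the audit sketch above.
async function revokeEverywhere(clientId: string) {
  const { data: userList } = await admin.users.list({
    customer: "my_customer",
    maxResults: 500,
  });

  for (const user of userList.users ?? []) {
    try {
      // Deleting the token invalidates the app's access for this user.
      await admin.tokens.delete({ userKey: user.primaryEmail!, clientId });
      console.log(`revoked for ${user.primaryEmail}`);
    } catch {
      // A 404 just means this user never granted the app; skip it.
    }
  }
}
```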
For Microsoft 365 shops, the same audit lives in the Entra admin center under Enterprise Applications.
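The Microsoft-side list can also be pulled in script form: delegated grants are exposed through Microsoft Graph. A sketch, assuming you already have a Graph access token with a directory-read permission such as Directory.Read.All; acquiring that token (client credentials flow or otherwise) is left out.

```typescript
// graph-grants.ts
// Lists delegated OAuth permission grants in a Microsoft 365 tenant.
async function listGraphGrants(accessToken: string) {
  const res = await fetch(
    "https://graph.microsoft.com/v1.0/oauth2PermissionGrants",
    { headers: { Authorization: `Bearer ${accessToken}` } },
  );
  if (!res.ok) throw new Error(`Graph error: ${res.status}`);
  const { value } = await res.json();
  for (const grant of value) {
    // clientId is the app's service principal; scope is the
    // space-separated list of delegated permissions it holds.
    console.log(`${grant.clientId}: ${grant.scope}`);
  }
}
```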
Establish a norm around OAuth approval. Right now, individual employees can approve third-party OAuth apps and grant them access to the entire org's Google Workspace. That's probably not the right default. In Google Workspace admin, you can change this so that new third-party apps require admin approval before being granted access. It adds a small amount of friction. It also means one person's "let me try this AI tool" doesn't become the organization's attack surface.
Treat AI productivity tools like you treat SaaS vendors. Specifically: before connecting a new AI tool to corporate accounts, spend five minutes looking at their security page. Do they have SOC 2? Do they publish a trust page? Have they had incidents before, and how did they respond? Context.ai's breach started months ago and Vercel's customers are finding out today. That lag is normal. It means the tools your team connected last year may have already been inside a breach you haven't heard about yet.
For Vercel customers specifically: Check whether you have environment variables that weren't marked as "sensitive" in your Vercel project settings. Sensitive variables can't be read back once created, through the dashboard or the API; they're only decrypted where the platform needs them. If your Supabase, Datadog, or AuthKit keys weren't marked that way, rotate them now. Vercel posted a knowledge base article on the incident with specific guidance.
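A quick way to check is Vercel's REST API, which reports each variable's type. A sketch, assuming a personal access token in VERCEL_TOKEN and the env endpoint as documented at the time of writing (GET /v9/projects/{idOrName}/env); verify the version against current docs, and add the teamId query parameter if the project belongs to a team.

```typescript
// check-env.ts
// Flags environment variables in a Vercel project that aren't marked sensitive.
const token = process.env.VERCEL_TOKEN!;
const project = "my-project"; // placeholder project ID or name

async function findNonSensitive() {
  const res = await fetch(`https://api.vercel.com/v9/projects/${project}/env`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Vercel API error: ${res.status}`);
  const { envs } = await res.json();
  for (const env of envs) {
    // "sensitive" is the protected type; "encrypted" and "plain" are readable.
    if (env.type !== "sensitive" && env.type !== "system") {
      console.log(`NOT sensitive: ${env.key} (${env.type})`);
    }
  }
}

findNonSensitive().catch(console.error);
```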
Mark everything sensitive. While you're in there: if you use Vercel or any other platform that has a "sensitive" or "encrypted" flag on environment variables, turn it on for everything. Production database URLs, API keys, webhook secrets — all of it. The extra protection is there; most people just don't use it.
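Flipping them over can be scripted with the same API. A sketch extending the check above, using the edit-variable endpoint (PATCH /v9/projects/{idOrName}/env/{id}); confirm in the current docs whether a variable can be converted in place or has to be deleted and recreated as sensitive.

```typescript
// mark-sensitive.ts
// Attempts to convert every readable variable to the sensitive type.
// Reuses `token` and `project` from the check sketch above.
async function markAllSensitive() {
  const res = await fetch(`https://api.vercel.com/v9/projects/${project}/env`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const { envs } = await res.json();
  for (const env of envs) {
    if (env.type === "sensitive" || env.type === "system") continue;
    const patch = await fetch(
      `https://api.vercel.com/v9/projects/${project}/env/${env.id}`,
      {
        method: "PATCH",
        headers: {
          Authorization: `Bearer ${token}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ type: "sensitive" }),
      },
    );
    console.log(`${env.key}: ${patch.ok ? "marked sensitive" : "failed"}`);
  }
}
```

For new variables, the CLI has a flag for this: `vercel env add MY_KEY production --sensitive`.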
The Structural Problem Worth Naming
This attack is a preview of a category of risk that's going to get worse before security tooling catches up. The AI productivity tool market grew explosively in 2024 and 2025. Hundreds of small startups built tools that plug into Google Workspace, Slack, Notion, GitHub. Many of them are built by small teams moving fast, with security as a secondary priority. Some of them will get compromised.
The access those tools have to your organization doesn't disappear when you stop using them. It sits there, live, until you explicitly revoke it.
For a 15-person NGO or a 40-person business: you don't have a dedicated security team watching for this. You're trusting that every AI startup you've ever tried has its access management right. The Vercel breach is a reminder that this is not a safe assumption.
The fix isn't to stop using AI tools — the productivity gains are too real. The fix is to treat OAuth access like a database of credentials that needs to be audited and maintained, not clicked through once and forgotten.
This is one of the things we specifically look at when we come in: what your team's AI tool footprint actually looks like, and where the access grants are that nobody has reviewed. If yesterday's news put a knot in your stomach about what's connected to your accounts, that's worth a conversation.