Last week, researchers at Georgia Tech's Systems Software & Security Lab published their March numbers for the Vibe Security Radar — a project that's been quietly tracking vulnerabilities introduced by AI coding tools since May 2025. The number of confirmed CVEs directly caused by AI-generated code: 35 in March alone. That's up from 15 in February and 6 in January.
Three months. Six to thirty-five. That's not a trend line — that's a slope.
Here's what makes this more than a "vibe coding is risky" story. Most of the tracked vulnerabilities aren't in code your team wrote. They're in open-source packages: tools your team installed and trusted without ever reading a line of their code.
What the Vibe Security Radar actually tracks
The SSLab team built a pipeline that pulls from public vulnerability databases (CVE.org, the National Vulnerability Database, GitHub's advisory database), finds the commit that introduced each vulnerability, then checks whether that commit carries the fingerprint of an AI coding tool: a co-author tag, a bot email address, a known tool's commit signature.
Of the 74 confirmed cases as of the March report, the tools flagged include GitHub Copilot, Cursor, Devin, and others. The researchers are careful to note that attribution is skewed: tools that leave metadata traces show up more often, while tools like Copilot's inline suggestions leave no trace at all. The real number is "almost certainly" 5 to 10 times higher — somewhere between 400 and 700 cases across the open-source ecosystem, just in projects where they can see enough metadata to check.
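The fingerprint-matching step can be sketched in a few lines. This is an illustrative guess at what such a check looks like; the co-author tags and bot addresses below are example patterns, not SSLab's actual detection list:

```python
import re

# Example fingerprints of AI coding tools in commit metadata.
# These patterns are illustrative assumptions, not SSLab's real ruleset.
AI_TOOL_PATTERNS = {
    "copilot": re.compile(r"Co-authored-by:.*Copilot", re.IGNORECASE),
    "devin": re.compile(r"devin-ai-integration\[bot\]", re.IGNORECASE),
    "cursor": re.compile(r"Co-authored-by:.*Cursor", re.IGNORECASE),
}


def ai_fingerprints(commit_message: str) -> list[str]:
    """Return the AI tools whose fingerprint appears in a commit message.

    Tools that add no trailer or bot address (e.g. inline suggestions)
    are invisible to this kind of check -- hence the skewed attribution.
    """
    return [
        tool
        for tool, pattern in AI_TOOL_PATTERNS.items()
        if pattern.search(commit_message)
    ]
```

In practice you would feed this the full commit bodies, e.g. from `git log --format="%B"`, and it would only ever catch the tools that leave metadata behind, which is exactly the undercounting the researchers describe.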
Why this matters if you don't write code
Most of the orgs we work with don't have engineering teams. They use SaaS tools, WordPress plugins, Python scripts they found somewhere, npm packages their last contractor installed. None of them think of themselves as running "software."
Here's the problem: the supply chain for all of that software is now substantially written by AI tools. The developer who maintains a popular authentication library, the freelancer building your NGO's intake form, the agency that built your data pipeline — all of them are using AI coding assistants. At this point, according to researchers, essentially every organization is running some AI-assisted code, whether they know it or not.
The Vibe Security Radar team pointed to one project in their data that has over 300 security advisories and is known to rely heavily on AI-generated code. Most of the AI tool traces had been stripped by the authors. Most of the vulnerabilities never received public CVE identifiers. That's the part that doesn't show up in any dashboard.
The governance gap is wide open
A stat worth sitting with: AI code tools are in use inside 99% of organizations. Only 29% have any governance around them. And the 15% of organizations that have explicitly banned these tools still show the same 99% adoption anyway.
You can ban something and have it everywhere anyway when developers can install it in five seconds.
For small orgs this creates a specific kind of exposure. You probably don't have a security team reviewing dependencies. You probably don't have a budget for a comprehensive audit. When a CVE gets filed for a package you use, you find out when your SaaS vendor emails you in six weeks, if at all.
The UK's National Cyber Security Centre called this out directly at RSAC on March 24. Their director asked the industry to build better guardrails, but those are vendor-level changes that will take years. The CVEs are being filed now.
What a small org can do today
You don't need to audit every dependency you've ever installed. You need a few lightweight habits:
Check before you install. Before adding a new package or plugin, spend 30 seconds on deps.dev or osv.dev. They surface known CVEs and maintenance signals. It takes less time than reading the README.
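If you want to script that 30-second check, osv.dev exposes a public query API. A minimal sketch, assuming OSV's documented v1 query endpoint (the helper names here are mine):

```python
import json
import urllib.request

OSV_API = "https://api.osv.dev/v1/query"  # public OSV query endpoint


def build_query(name: str, ecosystem: str, version: str) -> dict:
    """Build the OSV v1 query payload for a single package version."""
    return {"package": {"name": name, "ecosystem": ecosystem}, "version": version}


def known_vuln_ids(osv_response: dict) -> list[str]:
    """Pull advisory IDs (CVE, GHSA, ...) out of an OSV query response."""
    return [vuln["id"] for vuln in osv_response.get("vulns", [])]


def check_package(name: str, ecosystem: str, version: str) -> list[str]:
    """Query osv.dev for known vulnerabilities in one package version."""
    request = urllib.request.Request(
        OSV_API,
        data=json.dumps(build_query(name, ecosystem, version)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return known_vuln_ids(json.load(response))
```

Something like `check_package("lodash", "npm", "4.17.15")` returns a list of advisory IDs; an empty list means no known vulnerabilities for that exact version.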
Enable GitHub's Dependabot alerts. If any of your code lives in a GitHub repo — even a single website repo — turn on dependency alerts. It's free and takes two minutes to enable. You'll get notified when a package you're using has a known vulnerability.
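If you also want Dependabot to open update pull requests (the alerts themselves are a toggle in the repo's security settings), a minimal config file does it. A sketch, assuming an npm project at the repo root:

```yaml
# .github/dependabot.yml
# Alerts are enabled in Settings -> Code security; this file adds
# automated update PRs on top of those alerts.
version: 2
updates:
  - package-ecosystem: "npm"   # match your stack: pip, composer, etc.
    directory: "/"
    schedule:
      interval: "weekly"
```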
Add one question to your vendor review process. "Do you track CVEs in your dependencies, and how do you notify customers?" If a vendor can't answer this, that's a signal.
Know what you're running. Ask whoever manages your tooling to give you a list of the AI coding tools in use. Not to ban them — to know about them. The first step in any supply chain risk conversation is inventory.
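Inventory can start very small. For example, if any of your tooling runs on Python, a few lines enumerate what's installed on one machine; this covers one slice of the inventory, not AI tools specifically:

```python
from importlib.metadata import distributions


def package_inventory() -> dict[str, str]:
    """Map installed Python package names to versions.

    A minimal starting point for a dependency inventory: something
    you can paste into a spreadsheet and check against osv.dev.
    """
    return {
        dist.metadata["Name"]: dist.version
        for dist in distributions()
        if dist.metadata["Name"]
    }
```

Running the same idea across npm (`npm ls`), WordPress plugins, and SaaS subscriptions gets you the inventory that any supply chain conversation starts from.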
The broader picture
The Vibe Security Radar data is genuinely valuable because it makes something concrete that felt abstract. AI coding tools are fast and capable and that's real. They're also producing vulnerable code at a measurable rate, and that rate is climbing. That's also real.
This isn't a reason to stop using AI tools. It's a reason to pair them with the same hygiene you'd apply to any fast-moving technology: awareness, lightweight monitoring, and a clear owner for "who checks that what we're running is still safe."
Most small orgs don't have that owner yet. In a lot of cases, we help them figure out who it should be and what it should actually look like — without adding a lot of overhead to teams that are already stretched.
If that's a conversation worth having, reach out.