AI Is DDoSing the Maintainers Your Open Source Stack Depends On

CivSafe Team·April 22, 2026·7 min read

Yesterday, the Open Source Security Foundation formally launched a community survey asking maintainers to document the damage from AI-generated vulnerability reports — also called "AI slop." The survey is a small thing. What it represents is not.

The fact that a major security foundation has now allocated staff and a formal working group to measure this problem tells you how serious it's gotten. And it's gotten serious in a way that most small organizations have no idea about, because the damage is invisible until something breaks.

What's happening

Here's the attack pattern: someone runs a public codebase through an AI model, asks it to find potential security vulnerabilities, gets back a list of plausible-sounding issues, and files them all as bug reports — often through bug bounty programs that pay per submission. The reports look real. They use correct technical language. They cite relevant CWEs. They reference actual functions in the code. Most of them are wrong.

Not wrong in a way that's obvious. Wrong in a way that takes an experienced maintainer 20–40 minutes to work through before confirming it isn't a real vulnerability at all — just an AI hallucinating a threat.

Scale that across a project that gets hundreds of reports per month, and you've effectively DDoS'd the maintainers. That's Daniel Stenberg's phrase — he runs curl, the file transfer library that's in basically every internet-connected device on the planet — and he used it deliberately.

In January 2026, curl ended their bug bounty program entirely. Not paused it. Ended it.

"The incentive to submit crap is too high," Stenberg wrote. Back when the program launched, legitimate vulnerability confirmations ran above 15%. By late 2025 that had dropped below 5%. By the time they shut it down, it was closer to one in thirty.

Why this matters for your organization

Most small orgs think about their open source risk in terms of known CVEs — publicly catalogued, confirmed vulnerabilities with assigned identifiers. You run a scanner, you see what's vulnerable, you patch it. That model has always been imperfect, but it was functional.

Here's what's changing: the pipeline from "vulnerability exists" to "vulnerability gets a CVE number and lands in the scanner" runs through the same maintainers who are now drowning in AI-generated noise. When a volunteer maintainer has 200 reports in their queue and 90% of them are slop, the real ones are waiting weeks longer to get triaged. Some get missed entirely.

Node.js documented this concretely. During a major holiday period, their maintainers came back to 30+ AI-slop reports filed while nobody was watching. The legitimate reports were buried in the backlog. That's not a theoretical problem — that's a real window where a real vulnerability is sitting unfixed while the humans responsible for it are trying to find it inside a haystack of AI junk.

The problem isn't just patch velocity. When maintainers burn out and quit, projects get abandoned or handed off to whoever picks them up next. The "many eyes make bugs shallow" principle that makes open source security work depends on there being people with enough goodwill left to actually look. AI slop is eroding that goodwill systematically, and the tools most at risk are the ones that have been around long enough to have bounty programs — meaning the foundational infrastructure libraries your stack is built on.

The response so far

In March, the Linux Foundation raised $12.5 million from a coalition of major tech companies to help open source projects manage the security load AI is creating. The money is real and the intent is genuine. It'll fund better tooling for maintainers to detect and filter low-quality reports, improve triage infrastructure, and help projects that have lost maintainers find new ones.

That's the right kind of response. It's also going to take 12–18 months to build and deploy. The April 21st OpenSSF survey is the beginning of understanding the scope — which means we're still in the "measuring the problem" phase, not the "deploying solutions" phase.

For the tools your organization depends on today, the cavalry isn't here yet.

And here's the quiet irony: the OpenSSF working group has noted that there's no reliable technical indicator for AI-generated vulnerability reports. Detection is currently based on what they diplomatically called "vibes and maintainer intuition." The same property that makes AI slop hard to detect makes it cheap to produce at scale. This particular asymmetry isn't going to resolve itself quickly.

What to actually do

The practical response isn't to stop using open source software — that's not realistic, and most open source tools are still well-maintained. It's to be more intentional about which tools you're depending on for critical infrastructure.

Check maintainer health, not just CVEs. Before you decide to keep using a package for something important, spend two minutes on its GitHub page. When was the last commit? How long do issues sit open before someone responds? Is the same person doing all the work? A tool with a single exhausted maintainer and 400 unreviewed issues carries a different risk profile than the same tool did two years ago.

Use deps.dev or socket.dev for dependency health signals. These tools surface maintenance activity, contributor counts, and security posture in one view. They're free. Socket specifically tracks whether packages have had unusual publishing behavior — a useful early signal for supply chain risk.

Prefer organizations over individuals for critical dependencies. For the libraries that sit under your authentication, data handling, or any externally-facing tool — prefer packages maintained by organizations or foundations over solo maintainer projects. Not because individual maintainers are worse engineers, but because organizations are more resilient to the burnout problem.

Pin critical packages and review upgrades. We've written about this before in the context of supply chain attacks, but it applies here too: if you're running pip install --upgrade everything in your CI, you're accepting whatever was last published. Pin versions, test upgrades in staging, and make upgrading a deliberate decision rather than a default.
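One lightweight way to enforce pins in CI is to compare what's actually installed against your pinned list before deploying. A minimal sketch using Python's standard-library `importlib.metadata` (the package names you'd pass in are your own; nothing here is specific to any real project):

```python
"""Sketch: verify installed package versions match a pinned list,
e.g. as a CI gate before deploy. Pin data comes from your own lockfile."""
from importlib.metadata import PackageNotFoundError, version


def check_pins(pins: dict[str, str]) -> list[str]:
    """Return a list of mismatches between pinned and installed versions."""
    problems = []
    for name, pinned in pins.items():
        try:
            installed = version(name)
        except PackageNotFoundError:
            problems.append(f"{name}: not installed (pinned {pinned})")
            continue
        if installed != pinned:
            problems.append(f"{name}: installed {installed}, pinned {pinned}")
    return problems


# Usage: fail the build if anything drifted from the lockfile.
#   drift = check_pins({"requests": "2.32.3", "cryptography": "43.0.1"})
#   assert not drift, "\n".join(drift)
```

The point isn't the tooling — pip-tools, uv, or any lockfile-based workflow does this better — it's that "what version am I actually running?" becomes a question your CI answers, not one you discover during an incident.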

If you depend on a project, contribute. This sounds like generic advice but it's actually the specific fix for the problem. Bug bounty programs exist because triaging reports takes time maintainers don't have. If your organization depends on a tool and you have a developer who could spend a few hours per quarter reviewing incoming reports, that's directly valuable. Several major projects are explicitly asking for exactly this.

The bigger picture

The open source ecosystem has always run on volunteer labor. That's not a bug — it's produced some of the most reliable software on the planet. But it's always been fragile in a specific way: the labor is motivated by genuine interest and community goodwill, and both of those things can be depleted.

AI slop isn't a clever attack on any specific piece of software. It's a tax on volunteer attention at scale. And at some point, even the most dedicated maintainers start doing the math and deciding their weekends aren't worth it.

The Linux Foundation funding is a meaningful attempt to rebalance that equation. The survey that launched yesterday is an attempt to understand it. Neither of those things changes the reality for your stack this week — which is that the tools you rely on are maintained by people under a kind of sustained pressure the ecosystem has never seen before.

Knowing that changes how you should treat your dependencies.


We help small organizations — NGOs, public sector teams, SMBs — audit what they're actually running and build the lightweight habits that reduce supply chain exposure. If you want to know what your stack looks like and where the risks are, reach out.

CivSafe — Strategic Innovation. Community Impact.