All Insights

Cal.com Just Closed Its Codebase. The Open-Source Security Playbook Is Being Rewritten.

CivSafe Team·April 17, 2026·7 min read

Two days ago, Cal.com — the open-source scheduling tool that tens of thousands of small teams, nonprofits, and independent businesses have been using as a free alternative to Calendly — quietly moved its production codebase from public to private. Five years of open development. Gone from GitHub overnight.

The reason they gave is worth sitting with, because it applies to every open-source tool your organization is running right now.

What Cal.com said

Co-founder Peer Richelsen: "Open source security always relied on people to find and fix any problems. Now AI attackers are flaunting that transparency."

CEO Bailey Pumfleet put it more bluntly: "Open source code is basically like handing out the blueprint to a bank vault. And now there are 100× more hackers studying the blueprint."

The specific trigger was AI security research published earlier this month showing that frontier AI models can take a public codebase, identify vulnerabilities that existed for decades without anyone finding them, and generate working exploits in hours. Not days. Hours. The research found a 27-year-old bug in OpenBSD — one of the most security-conscious operating systems ever built — and a 16-year-old vulnerability in FFmpeg that automated scanning tools had run past five million times without flagging.

The "many eyes make bugs shallow" assumption that made open source feel inherently safer than closed source? AI just broke it. There are now eyes that never sleep, never get tired, and can read an entire codebase faster than a human can read a README.

The community isn't taking this lying down

The response from the open-source community has been sharp, and not entirely wrong.

Discourse — which builds the most widely used open-source forum platform and handles sensitive data for thousands of organizations — published a direct response within 48 hours: they're not going closed-source, and they think Cal.com's reasoning is flawed.

Their argument: AI models don't need source code to find vulnerabilities. They can probe compiled binaries. They can fuzz black-box APIs. They can test endpoints with no knowledge of the underlying implementation. Security by obscurity has been a known fallacy in infosec for decades, and closing your codebase doesn't mean attackers can't find your bugs — it just means the defenders lose the community's help finding them first.

There's also a pointed question critics keep raising about Cal.com's announcement: they simultaneously launched Cal.diy, an MIT-licensed open-source fork for hobbyists and developers who want to self-host. So the code is too dangerous to be public for enterprise calendars... but safe enough to be fully open for developers to run? That inconsistency suggests the decision is at least partly about risk segmentation and liability, not pure security.

Both sides have a point. That tension is actually the most important thing to understand here.

What's actually changed in the threat model

Here's the honest version: neither "keep it open" nor "lock it down" is the complete answer. What has genuinely changed is the timeline.

When a CVE gets filed against an open-source project, there used to be a window — sometimes weeks, sometimes months — between the public disclosure and widespread exploitation in the wild. Defenders used that window to patch, update, and deploy fixes before attackers built weaponized tools.

That window is collapsing. When an AI can go from "here's the public codebase" to "here's a working proof-of-concept exploit" in hours, the math changes for everyone. It doesn't matter if you're open or closed — what matters is how fast you can detect and patch.

Black Duck's 2026 Open Source Security & Risk Analysis report published earlier this year found that mean vulnerabilities per codebase more than doubled in one year — from 280 to 581. That's AI-assisted discovery on both sides of the equation: researchers finding more bugs, and attackers finding them too.

Why this matters more for small orgs than anyone will tell you

Large organizations running open-source software in production typically have it wrapped in layers: isolated network segments, intrusion detection, dedicated security teams watching for unusual behavior. When a new exploit drops, they have a playbook and people to execute it.

Small orgs running self-hosted open-source tools usually don't. And "self-hosted open-source" describes a huge portion of how small NGOs, public sector teams, and SMBs operate: self-hosted Nextcloud for file sharing, Mattermost or Rocket.Chat for internal comms, Outline or BookStack for internal wikis, Plausible or Umami for analytics, Gitea or Forgejo for code, Portainer for managing containers, Uptime Kuma for monitoring. Useful tools. Legitimately free. Often running on a VPS or a small VM with infrequent updates and nobody watching the logs.

Every one of those tools is now a faster-moving target than it was a year ago. Not because the software got worse — because the adversarial tooling scanning for vulnerabilities got dramatically better.

The specific risk for small orgs isn't targeted attacks. It's automated scanning. When AI-powered exploit tools can crawl the internet, identify servers running specific software versions, match them against known vulnerabilities, and attempt exploitation automatically — you don't need to be interesting to get hit. You just need to be running something old and exposed.

What to actually do

Update constantly, not occasionally. The patch window has shrunk. If a security update ships for any tool you're running, your window to apply it before exploitation attempts start is now measured in days, not weeks. Set up automatic updates for containers and packages where you can. Make "update the self-hosted stack" a weekly task, not a quarterly one.
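One way to make that weekly task concrete is to keep a tiny inventory of what you run and flag anything that has fallen behind. The sketch below assumes a hypothetical inventory dict mapping each tool to its running and latest version numbers — the tool names and versions shown are illustrative, not real release data.

```python
# Sketch: flag self-hosted tools whose running version lags the latest
# release. The inventory dict is a hypothetical example -- substitute the
# tools and versions you actually run.

def parse_version(v: str) -> tuple[int, ...]:
    """Turn '1.12.3' (or 'v1.12.3') into (1, 12, 3) for numeric comparison."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def outdated(inventory: dict[str, tuple[str, str]]) -> list[str]:
    """Return tool names where running < latest.

    inventory maps tool -> (running_version, latest_version).
    """
    return [
        tool for tool, (running, latest) in inventory.items()
        if parse_version(running) < parse_version(latest)
    ]

if __name__ == "__main__":
    stack = {
        "nextcloud": ("28.0.1", "28.0.4"),     # hypothetical versions
        "uptime-kuma": ("1.23.11", "1.23.11"),
    }
    print(outdated(stack))  # tools needing an update this week
```

Even a manual check like this, run on a schedule, beats discovering during an incident that a tool is three security releases behind.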

Know what you're exposing to the internet. Part of Cal.com's risk comes down to exposure — a scheduling tool handles external bookings, so it is internet-facing by definition. Run a quick audit: which of your self-hosted tools are reachable from the public internet, and which are internal-only? Internal tools have a much smaller attack surface even when unpatched. External-facing tools need to stay current.
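That audit can start as a simple TCP reachability check: for each service you believe is internal-only, try to connect from outside your network and flag anything that answers. The hostnames and ports below are hypothetical placeholders — substitute your own services, and run the script from a machine outside your perimeter.

```python
# Sketch of a minimal exposure check: for each service, attempt a TCP
# connect and report whether it answers. Hosts/ports are hypothetical
# examples -- list your own services and run this from outside your network.
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, filtered, timed out, or unresolvable
        return False

if __name__ == "__main__":
    services = [
        ("cloud.example.org", 443),  # file sharing: expected to be public
        ("wiki.example.org", 3000),  # internal wiki: should NOT answer
    ]
    for host, port in services:
        state = "reachable" if is_reachable(host, port) else "closed/filtered"
        print(f"{host}:{port} -> {state}")
```

Anything marked reachable that you expected to be internal-only belongs behind a VPN or firewall rule, not on the open internet.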

Seriously consider managed SaaS for high-stakes data. If you're self-hosting primarily for cost reasons and the tool handles client data, credentials, or anything you'd be embarrassed to disclose — consider whether the savings are worth it. Managed SaaS providers patch faster and have dedicated security teams. For a 10-person NGO, the math often favors "pay $30/month and let someone else worry about this."

Subscribe to security feeds for your stack. Most self-hosted tools publish security advisories via RSS or GitHub releases. Subscribe to them for every tool you run. If you don't know the tool has a critical patch available, you can't apply it. This is a 20-minute setup that pays off within months.
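For tools hosted on GitHub, the release feed is predictable: GitHub serves an Atom feed at /<owner>/<repo>/releases.atom for every repository. The sketch below just generates those URLs from a list of repo slugs so you can paste them into any RSS reader — the example slugs are illustrative of a typical small-org stack.

```python
# Sketch: generate GitHub release-feed URLs for your stack so they can be
# dropped into an RSS reader. GitHub serves an Atom feed at
# /<owner>/<repo>/releases.atom; the repo list is an illustrative example.

def release_feeds(repos: list[str]) -> list[str]:
    """Map 'owner/repo' slugs to their GitHub releases Atom feed URLs."""
    return [f"https://github.com/{slug}/releases.atom" for slug in repos]

if __name__ == "__main__":
    stack = [
        "nextcloud/server",
        "louislam/uptime-kuma",
        "outline/outline",
    ]
    for url in release_feeds(stack):
        print(url)
```

Note that releases.atom covers all releases, not just security fixes, so skim the release notes — but for most self-hosted tools, "new release exists" is the signal you need.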

Stop relying on "nobody would target us." That assumption was questionable before. Now that exploitation can be automated and scaled, it's not a real risk calculation. Automated scanning doesn't discriminate by organization size.

The bigger picture

The Cal.com debate is really a preview of a conversation the entire open-source community is about to have about what security looks like when AI is sitting on both sides of the equation. Discourse is right that obscurity doesn't solve the problem. Cal.com is right that the threat model has shifted.

For small organizations, the practical response isn't to stop using open-source tools — that's not realistic, and many of those tools are genuinely excellent. It's to treat your self-hosted stack with the same update discipline you'd apply to anything that touches your data.

The "set it and forget it" era of self-hosting is over.


We audit AI tooling stacks and security posture for small organizations — NGOs, public sector teams, and SMBs who are running more infrastructure than they realize and haven't looked at it recently. If you want a clear picture of what you're running and what's exposed, start here.

CivSafe — Strategic Innovation. Community Impact.