Tomorrow morning, hundreds of NHS England GitHub repositories will quietly disappear from public view.
Internal guidance note SDLC-8, issued April 29, ordered the UK's national health service to make all source code repositories private by default, effective May 11. Public access is now allowed only under "explicit and exceptional need," subject to Engineering Board approval.
The stated reason: a new generation of AI-powered code analysis tools can now ingest large codebases and surface vulnerabilities automatically, at scale. NHS England's internal guidance warns that public repositories "materially increase the risk of unintended disclosure of source code, architectural decisions, configuration detail, and contextual information that may be exploited."
That concern is legitimate. The response is not.
What's Actually In Those Repos
Here's where this gets complicated. NHS sources told The Register that the vast majority of the affected repositories contain "nothing that could realistically lead to a security incident." Documentation. Architecture diagrams. Front-end admin apps for managing clinic appointment times. Not patient records. Not cryptographic keys. Not production infrastructure.
Former NHSX adviser Terence Eden put it plainly: "Is it possible that AI will scan a repo and find a bug? Yes, 100 per cent likely. Is that going to be a bug that causes a security issue in a live NHS service somewhere? Almost certainly not."
The NHS has hundreds of repos. But locking all of them down because a few might be sensitive is the security equivalent of putting a padlock on your filing cabinet after someone stole your car.
The Open Source Community Pushed Back Fast
The Free Software Foundation Europe (FSFE) moved within days. Their official statement:
"Depublishing public code is not a security strategy. 'Security through obscurity' has been debunked as a security measure for a long time. Making repositories private does not protect NHS systems. It only limits who can help find and fix problems."
That's FSFE Senior Policy Project Manager Johannes Näder — and he's right. Hiding code from the public doesn't mean attackers can't find your vulnerabilities. It just means your community of defenders can't see them either.
An open letter opposing the closure gathered 74 signatories within days. The FSFE is calling on UK citizens to contact their MPs. The principle they're invoking: "Public money, public code." If public funds built the software, the public should be able to read it.
This isn't just philosophical. Open source code benefits from community scrutiny — researchers, developers, peer organisations running similar tooling. Close it off and you've traded that review for false confidence.
Why This Is Coming for Your Org
If you're running an NGO, a public sector team, or a mid-size business in Canada, this might feel like a distant UK problem. It's not.
The NHS just handed every nervous IT manager and compliance officer a template. Within months, someone in your organisation is going to look at your public GitHub projects, your shared codebases, your internal tooling — and say "given what AI can do now, shouldn't we lock these down?"
That conversation is coming. And unless you have a framework for answering it that goes beyond "yes, close everything," you're going to end up with a worse security posture than where you started.
The actual risks AI-powered scanning creates are more specific than "someone might read our code":
- Exposed service endpoints: A local AI inference server accidentally left on an open port is a real, concrete risk. A private GitHub repo is not.
- Committed secrets: If passwords, API keys, or tokens ever got committed to a repo — that's a training and process problem, not a visibility problem. Closing the repo doesn't undo the exposure.
- Architectural leakage through comments and history: If your code comments or commit history reveal how your production infrastructure works, that's a code hygiene problem. The fix is hygiene, not obscurity.
- Vulnerable dependencies: AI can absolutely scan public repos for outdated packages with known CVEs. But so can npm audit and pip-audit, which you should be running regardless (see the sketch after this list).
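To be concrete, here's roughly what that dependency check looks like for a Node or Python project; the commands are illustrative, so adapt them to your stack:

```bash
# JavaScript: check installed dependencies against known advisories,
# failing on anything rated high severity or above
npm audit --audit-level=high

# Python: pip-audit checks a requirements file against known CVEs
pip install pip-audit
pip-audit -r requirements.txt
```

Wire either into CI and an attacker's shiny AI scanner finds nothing you didn't already know about.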
None of these are solved by making your repos private. They're solved by fixing the underlying issues.
What a Proportionate Response Actually Looks Like
Rather than blanket closure, here's how a thoughtful small org should respond to AI-powered vulnerability scanning:
Audit what's in your public repos — actually audit them. Not all repos carry the same risk. A static documentation site and a payments integration library are completely different conversations. A blanket policy doesn't distinguish between them.
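If your code is on GitHub, getting that inventory started is a one-liner with the gh CLI (your-org is a placeholder, and this assumes gh is installed and authenticated):

```bash
# List every public repo in the org with its last-updated date
gh repo list your-org --visibility public --limit 1000 \
  --json name,updatedAt,description \
  --jq '.[] | [.updatedAt, .name, .description] | @tsv' | sort -r
```

Then sort the results into buckets: documentation and static sites, internal tooling, anything that touches production or payments. The buckets drive the policy, not the raw repo count.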
Stop secrets from hitting any repo, public or private. Use .gitignore properly, run a secrets scanning tool (Gitleaks, TruffleHog, or GitHub's built-in scanning are all solid), and add pre-commit hooks that catch credentials before they land. This matters for private repos too.
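As a minimal sketch with Gitleaks, here's both a one-off scan of a repo's full history and a pre-commit hook; the pinned version below is illustrative, so check the current release:

```bash
# One-off: scan the repo's entire history for committed secrets
gitleaks detect --source . --verbose

# Ongoing: catch credentials on the developer's machine, before commit
pip install pre-commit
cat > .pre-commit-config.yaml <<'EOF'
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4   # illustrative pin; use the current release
    hooks:
      - id: gitleaks
EOF
pre-commit install
```

The pre-commit route matters because it stops credentials before they ever reach a remote. Once a key is in history, rotating it is the only real fix.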
Scan your actual network attack surface. An AI tool finding an open port or an exposed API is a bigger risk than anyone reading your source code. Run nmap against your own infrastructure. Know what's actually reachable from the public internet before you worry about GitHub.
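Something like this is a reasonable starting point (example.org stands in for a host you actually own; never scan infrastructure that isn't yours):

```bash
# Quick pass: service detection on the 1,000 most common TCP ports
nmap -sV example.org

# Thorough pass: every TCP port, skipping ping-based host discovery
nmap -Pn -p- -sV example.org
```

Whatever shows up in that output is what an attacker's tooling sees first. Your GitHub visibility settings never enter into it.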
Don't confuse obscurity with protection. Attackers can run AI against your deployed services, your API responses, your public interfaces. They don't need your source code. If your security model depends on nobody understanding how your system works, that's not a security model.
Get a real risk assessment — not a compliance checkbox. An actual look at what you have, what's exposed, and what an attacker would realistically target. Most small org attack surfaces are pretty well understood once someone looks at them with fresh eyes.
The NHS will spend the next six months managing repo visibility permissions and fielding "exceptional access" requests from teams who need their own code. Their actual attack surface won't have changed.
The Pattern Worth Recognising
This is becoming a recurring story: a new AI capability makes something scary easier, and the institutional response is to restrict rather than fix.
AI-generated phishing got better, so orgs locked down email. Deepfakes got more convincing, so people restricted video tools. Now AI-powered code analysis is getting genuinely capable, and the instinct is to close the repos.
The restriction impulse is understandable. It's also wrong, for the same reason every time: visibility isn't the same as risk. Hiding something from view doesn't make it safer — it just removes the peer review and community defence while leaving the underlying vulnerability intact.
The FSFE said it cleanly: "Making repositories private does not protect NHS systems. It only limits who can help find and fix problems."
That framing applies everywhere. If your compliance team is about to make this call, it's worth getting a second opinion from someone who actually understands the threat model before you close everything off and declare the problem solved.
We run these kinds of assessments for small and mid-size orgs regularly — separating real exposure from compliance noise, and figuring out where a few hours of actual work buys you meaningfully better security. If this conversation is coming for your team, we're easy to reach.