Here's something nobody in the "AI transformation" conversation is talking about.
GitGuardian just dropped their annual State of Secrets Sprawl report (March 30), and it's the most uncomfortable reading we've done in months. The headline number: 28.65 million hardcoded secrets — API keys, passwords, tokens — were pushed to public GitHub in 2025. That's a 34% jump year-over-year. The largest single-year increase they've ever recorded.
But buried in the report is the finding that should matter most to your org: teams using AI coding assistants are leaking credentials at twice the rate of everyone else.
That 2x figure is not a rounding error. Baseline secret leak rate for all public GitHub commits: 1.5%. For commits co-authored by AI coding tools: 3.2%. That's 213% of baseline. And it's getting worse.
Why AI Tools Are Making This Worse
The short version: AI coding assistants are very good at writing functional code quickly. They are not thinking about your secrets.
When a developer asks an AI tool to scaffold an integration — "write me a script that connects to our OpenAI API and summarizes these intake forms" — the tool will often include the API key inline in the example code. Developer sees it working, commits it, moves on. The key is now in version history forever, even if they delete it in a later commit.
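The fix is mechanical: the key comes from the environment at runtime, never from the source file. Here's a minimal Python sketch of the pattern (the variable name OPENAI_API_KEY is a common convention, not a requirement):

```python
import os

# What AI-generated scaffolding often produces. Once committed, this
# line lives in git history even after you delete it:
# api_key = "sk-proj-REPLACE-ME"  # never do this

# Safer: read the key from the environment at runtime, and fail
# loudly if it's missing rather than limping along.
api_key = os.environ.get("OPENAI_API_KEY")
if not api_key:
    raise RuntimeError(
        "OPENAI_API_KEY is not set. Export it in your shell or load it "
        "from a git-ignored .env file; never hardcode it in source."
    )
```

It's a two-line change, but AI assistants won't make it for you unless you ask.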
But it's not just about developers being careless. The report found something newer and more specific: 24,008 secrets were found in MCP configuration files on public GitHub, with 2,117 of those confirmed live. If that acronym is new to you: MCP (Model Context Protocol) is the standard that lets AI assistants connect to external tools and data sources. Every major AI coding tool supports it. And most tutorials and setup guides for MCP tell users to put their API keys directly in a config file.
Those config files are getting committed to public repos. With live credentials in them.
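To make that concrete, here's a minimal sketch of what one of these files tends to look like. The mcpServers shape below is common across MCP clients, but the server name, package, and key are all invented for illustration:

```json
{
  "mcpServers": {
    "summarizer": {
      "command": "npx",
      "args": ["-y", "some-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "sk-proj-REPLACE-ME"
      }
    }
  }
}
```

Commit that file once and the key is in public history, whether or not you delete it in a later commit. Some clients can pull the value from your environment instead of a literal string; check your client's docs. Either way, the file belongs in .gitignore, a point we'll come back to below.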
Eight of the ten fastest-growing categories of leaked secrets in 2025 were tied to AI services — OpenAI, Mistral, Cohere, and similar. Credential leaks for AI services specifically grew 81% in one year, reaching over 1.27 million incidents. One data point from the report: 113,000 DeepSeek API keys leaked in 2025 alone.
The Part That Will Surprise You: It's Not Just Code
If your team doesn't write much code, you might think this doesn't apply to you. It does.
28% of the incidents in this report didn't come from code at all. They came from Slack, Jira, and Confluence.
Think about how your team operates. Someone sets up a new AI integration. They paste the API key into a Slack message to share it with a colleague. It ends up in a Jira ticket. Someone includes it in a Confluence doc titled "How to set up the AI tool." Nobody thinks of these as a security surface. They absolutely are.
The report found that secrets discovered only in collaboration tools were more likely to be critical than those found in code (56.7% vs 43.7%). Slack and Jira tend to contain production credentials — the real keys to real systems.
The Cost of Getting This Wrong
A leaked API key is not just embarrassing. It's expensive. A live OpenAI or cloud provider API key in a public repo can be picked up by automated scrapers within minutes. Attackers run up usage bills, extract data, or sell the key. We've seen this happen to a 14-person nonprofit that shared an Azure key in a Slack channel. The resulting bill was $4,200 before anyone noticed.
That story is common enough that GitGuardian tracked it at scale: 64% of the secrets that leaked in 2022 were still active and unrevoked in January 2026. Most orgs don't have a process to rotate credentials. They set up an API key, it works, and it never gets touched again — until someone gets the bill.
What You Should Do This Week
This is the kind of thing where awareness is step one, but it's not enough on its own. Here's what actually helps:
1. Audit what's already out there. GitGuardian offers a free scan for public repos. Run it on anything your team has published. You may find keys you forgot about years ago. (For a quick do-it-yourself first pass, see the sketch after this list.)
2. Set a rule: API keys never go in chat. This sounds obvious until you realize how often it happens. Slack is not a password manager. Make it a team norm, then enforce it with a password manager or secrets vault (1Password, Bitwarden, and HashiCorp Vault all work at small-org scale).
3. Rotate your AI service credentials now. If you've been using the same OpenAI, Mistral, or similar API key for more than a few months — and your team uses any kind of shared tooling — rotate it. It takes five minutes, and any copy of the old key that has already leaked stops working.
4. Watch your MCP configs. If your team is setting up AI agents or workflow tools that use MCP (which includes most modern AI coding and automation platforms), treat those config files like .env files. They should be in .gitignore before you write the first line. If you're pulling down tutorials or sample repos, check the config before running anything.
5. Get your CI/CD in the picture. The report found that 59% of compromised machines were CI/CD runners, not developer laptops. If you're using GitHub Actions, CircleCI, or similar, use their native secrets management, not hardcoded values (see the workflow sketch below).
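For item 1, here's the kind of quick-and-dirty first pass you can run yourself while you set up a proper tool. It's deliberately crude: the patterns below are a small sample of well-known key prefixes, and it will throw false positives. A dedicated scanner covers far more than this:

```python
import re
import subprocess

# A small sample of well-known credential prefixes. Real scanners
# match hundreds of patterns; this is a five-minute sanity check.
PATTERNS = re.compile(
    r"(sk-[A-Za-z0-9_-]{20,}"          # OpenAI-style keys
    r"|ghp_[A-Za-z0-9]{36}"            # GitHub personal access tokens
    r"|AKIA[0-9A-Z]{16}"               # AWS access key IDs
    r"|xox[baprs]-[A-Za-z0-9-]{10,})"  # Slack tokens
)

# Walk the full diff history of every branch, not just the current tree.
log = subprocess.run(
    ["git", "log", "-p", "--all"],
    capture_output=True, text=True, check=True,
).stdout

for hit in sorted({m.group(0) for m in PATTERNS.finditer(log)}):
    print("possible secret in history:", hit[:12] + "...")
```

Run it from the root of a cloned repo. Anything it flags was committed at some point; deleting it later didn't help.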
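And for item 5, this is what native secrets management looks like in GitHub Actions. The workflow and script names here are invented; the part that matters is the secrets reference, which pulls the key from the repo's encrypted secrets store (Settings > Secrets and variables > Actions) instead of from the file:

```yaml
# .github/workflows/summarize.yml (illustrative names throughout)
name: summarize-intake
on: workflow_dispatch

jobs:
  summarize:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run summarizer
        env:
          # Injected at runtime; never appears in the repo's files.
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: python summarize.py
```

CircleCI and the other major CI providers all have an equivalent; the principle is the same everywhere.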
What This Means for the Rush to Adopt AI Tools
The pattern we're seeing is this: small orgs are adopting AI tools fast because the productivity gains are real and visible. The security debt they're accumulating in the process is invisible until it isn't.
The GitGuardian data is a snapshot of what's happening when speed wins over hygiene. It's not an argument against using AI tools — we recommend them constantly. It's an argument for building the security habits at the same time, not after.
The good news for a 10-50 person org: this isn't a $200K enterprise security audit problem. The fixes are practical, cheap, and fast. You don't need a policy document. You need thirty minutes with your team and a password manager.
That's the kind of thing we set up during a sprint — getting AI tools in your team's hands and making sure the foundations are right at the same time. If your team is in the middle of an AI rollout and this raised some flags, get in touch.