Something shifted in Washington this week. Quietly, with no big speeches. But the infrastructure for mandatory AI pre-approval just got a lot more real, and if you're a small NGO, a public-sector team, or any organization that uses AI in work touching government funding or contracts, you need to understand what just changed.
Here's What Actually Happened
On May 5, the Center for AI Standards and Innovation (CAISI), housed inside NIST, announced new testing agreements with Google DeepMind, Microsoft, and xAI. Those three join OpenAI and Anthropic, which signed similar agreements back in 2024 with CAISI's predecessor, the US AI Safety Institute. All five of the dominant frontier AI labs are now inside a formal US government pre-deployment review program.
Then, on May 6, White House National Economic Council Director Kevin Hassett told reporters that the administration is actively studying an executive order that would require AI models to go through a vetting process — and he compared it directly to FDA drug approval. Before public release. Mandatory.
The White House walked it back within 48 hours. Voluntary for now, said the Chief of Staff. Still studying it.
But the infrastructure is already there. And "voluntary today" has a way of becoming "mandatory next year" once the political conditions are right. We've seen this movie before.
What CAISI Actually Does
This isn't a rubber-stamp review board. CAISI tests frontier AI models with their safety guardrails removed. That means they're probing for what these systems can actually do when you take the filters off — biosecurity risks, cyberattack capabilities, military application scenarios.
Multiple federal agencies participate through the TRAINS Taskforce: Defense, Energy, Homeland Security, and others. CAISI has now completed more than 40 evaluations, some of them on models that have never been released to the public. They're running red-team exercises on unreleased AI systems so the government understands the threat landscape before it hits the market.
This isn't hype. This is real government infrastructure, quietly built over the past 18 months, that now covers every major player at the frontier.
The Two-Tier System That's Forming
Here's the structural problem for smaller organizations: these five companies — Google DeepMind, Microsoft, xAI, OpenAI, and Anthropic — now have a privileged regulatory relationship with the US government. Their models are known. Their capabilities are documented. Their executives have signed agreements with federal agencies.
Everyone else's models are not.
If this becomes mandatory — and the political trajectory is pointing that way — only models that have cleared CAISI review would be certified for use in federally adjacent contexts. Think about what that covers: any NGO receiving federal grants, any service provider working with public sector clients, any organization processing government data.
That's a massive regulatory moat. And it runs directly in favor of the five companies that can afford the compliance infrastructure to maintain ongoing federal relationships.
Small AI startups? Open-source projects? Academic labs? They're not in that room.
The Open-Source Wild Card
Here's where it gets complicated. CAISI has already evaluated at least one open-source model — DeepSeek V4 Pro — which suggests the program isn't exclusively closed-source. But open-source models released by smaller teams, community projects, or foreign labs don't go through pre-deployment review by design. They're released publicly, full stop.
For now, that's fine. You can run Mistral, Qwen, or Gemma locally today with no regulatory questions.
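To make "locally" concrete, here's a minimal sketch of what that looks like, assuming you've installed Ollama (one popular way to run open models on your own hardware) and pulled a model with `ollama pull mistral`. The endpoint, model name, and prompt are just illustrative defaults, not anything specific to a regulatory program.

```python
# A minimal sketch of a fully local AI query, assuming Ollama is installed
# (https://ollama.com) and a model was pulled with `ollama pull mistral`.
# Nothing in this request leaves your machine; no vendor API key is involved.
import requests

response = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    json={
        "model": "mistral",                 # any locally pulled model works
        "prompt": "Summarize this grant report in three bullet points: ...",
        "stream": False,                    # return one complete response
    },
    timeout=120,
)
print(response.json()["response"])
```

That's the whole loop: no external API, no data leaving the building, which is exactly the property that matters for sensitive work.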
But if mandatory pre-approval becomes policy, using an unreviewed model in government-adjacent work could get murky fast. Procurement officers — already gun-shy about AI — will gravitate toward the path of least legal risk. That path is the five companies with CAISI agreements.
The irony: the open-source models that are often cheaper, more private, and better suited for sensitive local deployment could get pushed out of regulated contexts not because they're worse — but because they never had the federal access to prove they're safe.
The Canadian Angle
We work mostly with Canadian organizations, so let's be direct: NIST frameworks don't stop at the 49th parallel. Canada's AI governance approach draws heavily from US standards, and federal procurement in Ottawa already mirrors US vendor risk frameworks more than most people realize.
If CAISI pre-approval becomes a meaningful US regulatory expectation, Canadian public sector procurement will start reflecting that pressure within 12–18 months. Treasury Board guidance on AI procurement is already pointing toward vendor accountability frameworks. It won't be long before "is your vendor part of a recognized government AI evaluation program?" becomes a standard due diligence question in RFPs.
What To Do Right Now
Don't panic. But do pay attention, because the moves you make on AI tool selection in the next six months will matter.
Audit your AI stack today. List every AI tool your team uses. Note which vendor provides it. Check whether that vendor is a major player with an established government relationship or a smaller provider that may face regulatory uncertainty.
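If it helps to make that audit systematic, here's one way to structure it. The fields, example entries, and vendor flags below are illustrative assumptions, not a compliance checklist.

```python
# A minimal sketch of an AI tool inventory. The fields and the two example
# entries are placeholders; adapt them to what your team actually uses.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str                 # the tool as your team knows it
    vendor: str               # who actually provides the model behind it
    model: str                # which model family, if the vendor discloses it
    touches_gov_work: bool    # does it ever see government-funded work or data?
    vendor_in_fed_program: bool  # e.g., a CAISI agreement, per public announcements

inventory = [
    AITool("Drafting assistant", "OpenAI", "GPT-4 class", True, True),
    AITool("Internal chatbot", "Small SaaS vendor", "undisclosed", True, False),
]

# Flag the combination that deserves attention first: government-adjacent
# work running through a vendor with no federal evaluation relationship.
for tool in inventory:
    if tool.touches_gov_work and not tool.vendor_in_fed_program:
        print(f"Review: {tool.name} ({tool.vendor}) touches government work "
              "through a vendor with no federal evaluation agreement.")
```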
Understand what your work touches. If your organization receives federal or provincial funding, or delivers services to government clients, you're going to face more scrutiny on AI tools than a pure private-sector SMB. Get ahead of that.
Don't consolidate onto a single vendor right now. The worst position to be in when regulations shift is complete dependency on one API. Diversify — even if that just means maintaining the capability to switch.
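One lightweight way to keep that switching capability: route every AI call in your codebase through a single thin interface, so changing vendors is a one-line config change rather than a rewrite. Here's a minimal sketch. The `ChatProvider` interface and class names are invented for this example; only the SDK calls inside them are real, and the model ids are examples you'd swap for current ones.

```python
# A minimal sketch of vendor-switching capability: application code depends
# on one interface, never on a specific SDK. Assumes the official `openai`
# and `anthropic` Python packages; ChatProvider and the class names are ours.
from typing import Protocol

class ChatProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def __init__(self, model: str = "gpt-4o"):  # example model id
        from openai import OpenAI
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

class AnthropicProvider:
    def __init__(self, model: str = "claude-3-5-sonnet-20241022"):  # example id
        import anthropic
        self.client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

# Switching vendors means changing this one line, nothing else.
provider: ChatProvider = OpenAIProvider()
```

The point isn't this exact code. It's that only one file in your stack should know which vendor you're on.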
Consider local AI for sensitive work. Running an open-source model on your own hardware bypasses API dependency entirely and sidesteps a lot of the regulatory uncertainty. For organizations handling sensitive government data, this is worth a serious conversation now rather than later.
Ask your vendors direct questions. "What's your regulatory compliance posture? Are you participating in any government AI evaluation programs?" If they can't answer clearly, that's signal.
The regulatory landscape for AI is moving faster than most small organizations realize. The teams that spend the next few months understanding their exposure and building flexibility into their AI stack will be the ones that don't get caught flat-footed when the next executive order drops.
We spend a lot of time helping small teams think through exactly this kind of thing — not in theory, but in practice. Which tools to use, how to structure workflows to stay flexible, and where the real risks actually live. If this is keeping you up at night, let's talk.