All Insights

There's Now a Free Firewall for Your AI Agents — Install It Before You Need It

CivSafe Team·May 6, 2026·6 min read

Two days ago, a developer named Joshua Waldrep shipped Pipelock v2.3.0 under his PipeLab project. It's free, open-source, and it solves a problem that's been sitting in plain sight for every team running AI agents: your agents can reach the internet, and nobody's watching what they send.

If you're using Cursor, CrewAI, LangGraph, or any other agentic AI tool right now — this one's for you.

The gap nobody was filling

When a person at a keyboard makes an HTTP request, there's a human moment of judgment before it happens. When an AI agent makes one, there isn't. The agent fetches a URL, calls a tool, follows an instruction — all in milliseconds, without hesitation.

That's usually what you want. Agents are useful because they move fast. But it creates a problem: every API key, database credential, and token your agent has access to is one poisoned instruction away from being sent somewhere it shouldn't go. Not by a hacker who broke in — by the agent itself, doing exactly what it was told.

This is the pattern behind PocketOS wiping their production database last month. It's the attack surface in the CrewAI RCE chain. It's what makes prompt injection so dangerous in agentic workflows: if you can control what the agent reads, you can control what it does — including what it sends and where.

The defensive tools for this have been either enterprise-priced (Lakera, Protect AI), deeply tied to specific platforms, or purely theoretical ("just scope your permissions"). Nothing you could drop in front of any agent in five minutes, for free, and actually trust.

Until this week.

What Pipelock does

Pipelock is an egress proxy. It sits between your AI agent and the network, and every HTTP, WebSocket, and MCP request your agent makes has to pass through it first.

The key architectural decision — and it's a smart one — is that Pipelock lives outside the agent's trust boundary. That matters. An agent can be instructed to ignore its own guardrails. It can't be instructed to ignore a proxy it doesn't control. The scan happens whether the agent cooperates or not.

Outbound requests run through an 11-layer scanner. It checks for 48 credential patterns (API keys, tokens, bearer headers, crypto mnemonics), blocks SSRF attempts and DNS rebinding, analyzes URL path and subdomain entropy for tunneling behaviour, and enforces per-domain rate limits and data budgets. All of this happens before DNS queries even leave the proxy — roughly 40 microseconds of overhead per URL.
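To make two of those layers concrete, here is a minimal Python sketch of literal credential-pattern matching and subdomain-entropy analysis. The patterns and thresholds below are illustrative stand-ins, not Pipelock's actual rules:

```python
import math
import re

# Illustrative credential shapes (NOT Pipelock's real 48-pattern set).
CREDENTIAL_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key ID shape
    re.compile(r"ghp_[A-Za-z0-9]{36}"),               # GitHub token shape
    re.compile(r"(?i)bearer\s+[A-Za-z0-9._-]{20,}"),  # bearer header
]

def shannon_entropy(s: str) -> float:
    """Bits per character; high values suggest encoded payloads."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def scan_outbound(url: str, body: str) -> list[str]:
    """Return a list of findings for one outbound request."""
    findings = []
    for pat in CREDENTIAL_PATTERNS:
        if pat.search(body) or pat.search(url):
            findings.append(f"credential pattern: {pat.pattern}")
    # Long, high-entropy subdomains are a common DNS-tunneling signal.
    subdomain = url.split("//")[-1].split("/")[0].split(".")[0]
    if len(subdomain) > 16 and shannon_entropy(subdomain) > 4.0:
        findings.append("high-entropy subdomain (possible DNS tunneling)")
    return findings
```

A real scanner layers nine more checks on top of these (SSRF, rebinding, rate limits, budgets), but the shape is the same: inspect, collect findings, then decide.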

Inbound responses get scanned too, and this is often the part teams miss. Prompt injection doesn't always arrive in user input. A malicious webpage, a poisoned tool result, a compromised external API — any of these can deliver instructions that redirect an agent's behaviour mid-task. Pipelock runs 25 injection-detection patterns across responses, with six-pass text normalization that catches zero-width characters, leetspeak, and base64 payloads that evade naive pattern matching.
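The normalization idea is worth seeing in miniature. This sketch (my own illustration, not Pipelock's code) shows three of the passes: stripping zero-width characters, mapping common leetspeak substitutions, and decoding base64-looking runs before re-scanning:

```python
import base64
import re

# Zero-width characters map to None, so str.translate deletes them.
ZERO_WIDTH = dict.fromkeys(map(ord, "\u200b\u200c\u200d\ufeff"))
# Common leetspeak substitutions: 0->o, 1->l, 3->e, 4->a, 5->s, 7->t, $->s, @->a.
LEET = str.maketrans("013457$@", "oleastsa")

# One example injection pattern (a real scanner would carry many more).
INJECTION = re.compile(r"ignore (all )?(previous|prior) instructions", re.I)

def normalize(text: str) -> str:
    text = text.translate(ZERO_WIDTH)  # pass 1: strip zero-width chars
    text = text.translate(LEET)        # pass 2: undo leetspeak
    return text.lower()                # pass 3: case-fold

def looks_injected(text: str) -> bool:
    if INJECTION.search(normalize(text)):
        return True
    # Extra pass: decode base64-looking runs and re-scan the plaintext.
    for run in re.findall(r"[A-Za-z0-9+/=]{24,}", text):
        try:
            decoded = base64.b64decode(run, validate=True).decode("utf-8", "ignore")
        except Exception:
            continue
        if INJECTION.search(normalize(decoded)):
            return True
    return False
```

Naive pattern matching fails on `Ign0re previ0us instructions` or a zero-width-spaced variant; after normalization, both reduce to the same string and match.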

MCP traffic gets bidirectional scanning. If your setup uses Model Context Protocol servers — increasingly common as teams connect agents to internal tools like Notion, Linear, or custom databases — Pipelock scans both the tool call arguments going out and the tool results coming back. It recognizes tool poisoning patterns and tool chain attack sequences specifically.
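What "bidirectional" means in practice: the same scanner runs on both sides of a tool call. This wrapper uses assumed interfaces (neither Pipelock's API nor the MCP SDK's) purely to show the shape:

```python
import json

def scan_text(text: str) -> list[str]:
    """Stand-in for the credential and injection scanners described above."""
    findings = []
    if "ghp_" in text:
        findings.append("possible GitHub token in payload")
    if "ignore previous instructions" in text.lower():
        findings.append("injection phrase in payload")
    return findings

def guarded_tool_call(call_tool, name: str, arguments: dict) -> dict:
    # Outbound: scan tool call arguments before they leave.
    out = scan_text(json.dumps(arguments))
    if out:
        raise ValueError(f"blocked outbound tool call: {out}")
    result = call_tool(name, arguments)
    # Inbound: scan the tool result before the agent ever reads it.
    back = scan_text(json.dumps(result))
    if back:
        raise ValueError(f"blocked poisoned tool result: {back}")
    return result
```

The inbound half is what catches tool poisoning: a compromised MCP server can return instructions in its results, and those are scanned before the agent sees them.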

v2.3.0, which shipped Sunday, added two things that make it genuinely practical for real workflows. First: class-preserving request redaction, which rewrites detected credentials in request bodies before egress rather than just blocking the request outright. Second: generic SSE streaming response scanning, meaning you don't lose your real-time streaming interface while inspection is running.
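A plausible reading of "class-preserving" (this is assumed behavior, not Pipelock's source): each detected secret is rewritten to a dummy of the same length and character classes, so the request body still parses and length-sensitive receivers don't break:

```python
import re

GH_TOKEN = re.compile(r"ghp_[A-Za-z0-9]{36}")  # GitHub token shape

def class_preserving(match: re.Match) -> str:
    """Replace a matched secret with a same-shape dummy."""
    out = []
    for ch in match.group(0):
        if ch.isupper():
            out.append("X")   # uppercase stays uppercase
        elif ch.islower():
            out.append("x")   # lowercase stays lowercase
        elif ch.isdigit():
            out.append("0")   # digits stay digits
        else:
            out.append(ch)    # structural chars like '_' pass through
    return "".join(out)

def redact(body: str) -> str:
    return GH_TOKEN.sub(class_preserving, body)
```

The request still goes out, it just no longer carries the secret. That's the difference from blocking: the agent's workflow keeps moving.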

What deployment looks like in practice

For a small team, it's two commands:

brew install luckyPipewrench/tap/pipelock
pipelock init

Docker is available if you prefer containers. There are pre-built configuration presets for the agent tools most teams are already using — including configs for Cursor, VS Code, CrewAI, LangGraph, and OpenAI Agents SDK — so you're not starting from scratch on allowlists.

The default mode ("balanced") warns and logs without blocking. Run it for a day or two before committing to enforcement. "Strict" mode locks down all egress to an allowlist you control. "Audit" logs everything for review without taking any action — useful if you just want to understand your current exposure before deciding anything.
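The difference between the three modes comes down to one decision function. This sketch assumes a simplified policy (example domains and return values are mine, not Pipelock's):

```python
from enum import Enum
from urllib.parse import urlparse

class Mode(Enum):
    AUDIT = "audit"        # log everything, take no action
    BALANCED = "balanced"  # warn on findings, never block
    STRICT = "strict"      # block anything off the allowlist

ALLOWLIST = {"api.openai.com", "api.github.com"}  # example domains

def decide(mode: Mode, url: str, findings: list[str]) -> str:
    host = urlparse(url).hostname or ""
    if mode is Mode.STRICT and host not in ALLOWLIST:
        return "block"
    if mode is Mode.BALANCED and findings:
        return "warn"
    return "log" if mode is Mode.AUDIT else "allow"
```

Note that strict mode doesn't need to understand the payload at all: an unknown destination is enough to block, which is why it closes gaps that pattern matching can't.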

The project ships with compliance documentation for OWASP Agentic Top 15, OWASP MCP Top 10, NIST 800-53, and EU AI Act mappings. If you work in a regulated sector — healthcare, public sector, financial services — those mappings matter more than they might seem when someone asks how you're managing agentic AI risk.

The core is Apache 2.0 licensed and fully free. Enterprise features (per-agent identity, multi-agent budget isolation, config isolation between teams) are behind an Elastic License 2.0 commercial option. For a team of 3-15 people, the free tier covers everything meaningful.

547 GitHub stars and growing as of this morning.

The honest limitation

Balanced mode catches naive exfiltration: a credential appearing literally in a URL or request body gets flagged. A sophisticated attacker who has already compromised your agent and knows Pipelock is present could work around it; chunked exfiltration across many small requests, for example, is harder to catch than a single direct dump.
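The per-domain data budgets mentioned earlier are the natural counter to chunking: cumulative accounting means many small requests still add up against one counter. A minimal sketch of that idea (an assumed mechanism, not Pipelock's implementation):

```python
from collections import defaultdict

class DomainBudget:
    """Track cumulative outbound bytes per domain against a hard cap."""

    def __init__(self, max_bytes: int):
        self.max_bytes = max_bytes
        self.sent = defaultdict(int)

    def admit(self, domain: str, payload_len: int) -> bool:
        # Deny once the cumulative total would exceed the budget,
        # no matter how small each individual request is.
        if self.sent[domain] + payload_len > self.max_bytes:
            return False
        self.sent[domain] += payload_len
        return True
```

A hundred 1 KB requests hit a 100 KB budget exactly as fast as one 100 KB dump, which is the point: the chunking evasion stops working once accounting is cumulative.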

Strict mode closes most of those gaps by blocking all non-allowlisted domains. The tradeoff is maintaining that allowlist, which adds operational friction. For most small teams, balanced is the practical starting point; use strict for any agent that touches production secrets or handles financial data.

This doesn't replace credential scoping, auditing what tools your agents can call, or thinking carefully about least-privilege before you wire up an agent to anything important. It's an additional enforcement layer, not a substitute for the basics.

What to do this week

If your team has any kind of agent workflow running — even Cursor with database access, a CrewAI pipeline touching your CRM, or a LangGraph flow reading your email — spend twenty minutes this week:

  1. Install Pipelock in audit mode
  2. Let it run alongside your normal workflow for a day
  3. Review the logs

You will almost certainly see credentials and sensitive patterns showing up in places you didn't expect. That's not a Pipelock problem — it's information you didn't have before. What you do with it is up to you.

We help teams map their agent setups: which tools have access to what, where the actual exposure is, and which controls fit the stack you're running. If you're moving fast with agents and haven't done that audit yet, doing it before an incident is a lot cheaper than doing it after.

github.com/luckyPipewrench/pipelock

CivSafe — Strategic Innovation. Community Impact.