Launching Veil AI Firewall

April 16, 2026 ยท 4 min read

Veil started as one job done well: keep sensitive data out of upstream LLMs. That still matters, and it is still built into every request. But the real problem teams are running into now is bigger than PII.

Prompt injection attacks are showing up in agent workflows. Tool descriptions can poison model behavior. Outputs can leak secrets or internal prompts. MCP servers are quickly becoming part of the application attack surface.

So Veil is now Veil AI Firewall: one API to secure prompts, responses, and tool calls.

What ships today

What did not change

The new API surface

POST /v1/chat/completions
x-veil-input-policy: off|monitor|block
x-veil-output-policy: off|monitor|block
x-veil-hallucination-flags: off|on

POST /v1/firewall/input
POST /v1/firewall/output
POST /v1/firewall/mcp

The idea is simple: if you already route model calls through Veil, you can turn on runtime AI security without changing vendors, auth, or billing.

Why MCP matters

MCP is making tool access a standard part of the model stack. That is useful, but it also creates a new place for hidden instructions, scope confusion, and data exfiltration to enter the system. We think MCP inspection will become a default requirement for agentic apps.

Try Veil AI Firewall

Start on the free tier, point a staging client at Veil, and turn on input or output policies one header at a time.

Get Free API Key