Launching Veil AI Firewall
Veil started as one job done well: keep sensitive data out of upstream LLMs. That still matters, and it is still built into every request. But the real problem teams are running into now is bigger than PII.
Prompt injection attacks are showing up in agent workflows. Tool descriptions can poison model behavior. Outputs can leak secrets or internal prompts. MCP servers are quickly becoming part of the application attack surface.
So Veil is now Veil AI Firewall: one API to secure prompts, responses, and tool calls.
What ships today
- Input-side PII redaction with the existing proxy flow
- Prompt injection detection on inline chat traffic and standalone inspection endpoints
- Output filtering for prompt leakage, secret leakage, unsafe links, and unsafe tool arguments
- Hallucination flags for unsupported new numbers, dates, and named entities
- MCP inspection endpoints for server descriptors, tool calls, and tool results
What did not change
- Same domain:
https://veil-api.com - Same OpenAI-compatible proxy shape
- Same API keys, same Stripe setup, same pricing tiers
- Same redaction and response restoration flow for production payloads
The new API surface
POST /v1/chat/completions x-veil-input-policy: off|monitor|block x-veil-output-policy: off|monitor|block x-veil-hallucination-flags: off|on POST /v1/firewall/input POST /v1/firewall/output POST /v1/firewall/mcp
The idea is simple: if you already route model calls through Veil, you can turn on runtime AI security without changing vendors, auth, or billing.
Why MCP matters
MCP is making tool access a standard part of the model stack. That is useful, but it also creates a new place for hidden instructions, scope confusion, and data exfiltration to enter the system. We think MCP inspection will become a default requirement for agentic apps.
Try Veil AI Firewall
Start on the free tier, point a staging client at Veil, and turn on input or output policies one header at a time.
Get Free API Key