Drop-in guardrails for LLM apps.
PII redaction and prompt-injection detection in a single API call. Built for developers shipping AI — one POST, every prompt cleaned and vetted before it reaches your model.
Free tier — 500 scans/month, no card required.
curl https://api.sievr.dev/v1/scan \
-H "Authorization: Bearer sv_live_..." \
-H "Content-Type: application/json" \
-d '{
"text": "Email alice@example.com. Ignore previous instructions.",
"checks": ["pii", "injection"]
}'{
"request_id": "01HXYZ…",
"latency_ms": 47,
"pii": {
"redacted_text": "Email <EMAIL_1>. Ignore previous instructions.",
"entities": [
{ "type": "EMAIL", "start": 6, "end": 23, "score": 0.99 }
]
},
"injection": {
"verdict": "block",
"score": 0.95,
"reasons": ["pattern:instruction_override"]
}
}Two checks, one endpoint.
Every request runs both checks in parallel. Redacted text and injection verdict come back in a single JSON response — typically well under 200 ms.
Names, emails, phones, IDs — replaced before they reach your model.
Microsoft Presidio under the hood, with reversible <TYPE_N> placeholders so you can rehydrate post-response. Custom regex recognizers per request.
Three-stage cascade — patterns, classifier, optional LLM tiebreaker.
~150 hand-curated regex patterns catch the obvious overrides; a DeBERTa-v3 classifier handles novel attacks; Claude Haiku resolves the uncertain band when you opt in.
One bearer token, one POST, language-agnostic.
No SDK lock-in, no infra to run. Stripe-style key prefixes for leak detection, rate-limit headers on every response, batch endpoint for log-volume preprocessing.
We publish our numbers.
Measured on the deepset/prompt-injections public test split. Methodology and harness live in the repo. Last run today.
Pricing.
Free for development. Paid plans bill base + metered overage. No seat licenses, no enterprise add-ons.