Shield SDKProvider Wrappers
OpenAI Provider
Wrap your OpenAI client with Shield protection for automatic prompt hardening, injection detection, and output sanitization.
OpenAI Provider
The shieldOpenAI wrapper adds transparent security to your existing OpenAI client. It intercepts every chat.completions.create call to harden system prompts, detect injections in user messages, and sanitize leaked content from responses.
Usage
import OpenAI from "openai";
import { shieldOpenAI } from "@zeroleaks/shield/openai";
const client = shieldOpenAI(new OpenAI(), {
systemPrompt: "You are a financial advisor...",
onDetection: "block",
});
const response = await client.chat.completions.create({
model: "gpt-4o",
messages: [
{ role: "system", content: "You are a financial advisor..." },
{ role: "user", content: userInput },
],
});How It Works
On every call to chat.completions.create, Shield:
- Clones the messages array (never mutates your original objects)
- Hardens any
systemmessage with security rules (unlessharden: false) - Scans every
usermessage for injection patterns (unlessdetect: false) - Calls the original OpenAI API
- Sanitizes the response text for leaked system prompt fragments (unless
sanitize: false)
Options
| Option | Type | Default | Description |
|---|---|---|---|
systemPrompt | string | � | The system prompt to protect (used for output sanitization) |
harden | HardenOptions | false | {} | Hardening options, or false to disable |
detect | DetectOptions | false | {} | Detection options, or false to disable |
sanitize | SanitizeOptions | false | {} | Sanitization options, or false to disable |
streamingSanitize | "buffer" | "chunked" | "passthrough" | "buffer" | "buffer": full buffer then sanitize. "chunked": 8KB chunks, lower memory. "passthrough": skip sanitization. |
streamingChunkSize | number | 8192 | Chunk size for "chunked" mode |
throwOnLeak | boolean | false | When true, throw LeakDetectedError instead of redacting leaked content |
onDetection | "block" | "warn" | "block" | "block" throws an error, "warn" calls the callback only |
onInjectionDetected | (result) => void | � | Callback when injection is detected |
onLeakDetected | (result) => void | � | Callback when output leak is detected |
Blocking vs Warning
By default, onDetection is "block" -- if an injection is detected, Shield throws an Error with details about the risk level and matched categories. To log instead of blocking:
const client = shieldOpenAI(new OpenAI(), {
onDetection: "warn",
onInjectionDetected: (result) => {
console.warn(`Injection detected: ${result.risk}`, result.matches);
},
});