The Safety Layer for Your AI Apps

A modern content moderation, PII detection, and safety toolkit for developers. Filter toxic content, protect user privacy, and stay compliant in minutes.

npm install safeguard-ai
const SafeguardAI = require('safeguard-ai');

// Initialize the shield
const moderator = new SafeguardAI({
  apiKey: process.env.OPENAI_API_KEY,
  redactPII: true
});

// Protect your application (in CommonJS, run awaits inside an async function)
const result = await moderator.checkText("Email me at john@example.com");

if (!result.safe) {
  console.log('Blocked:', result.categories);
} else {
  // result.cleanText: "Email me at [REDACTED_EMAIL]"
  processMessage(result.cleanText);
}

AI Moderation

Advanced detection of toxicity, hate speech, violence, and harassment using context-aware LLM providers.

PII Detection

Automatic detection and redaction of emails, phone numbers, SSNs, and credit card numbers to keep your data compliant.
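
The local side of PII detection can be pictured as a series of regex passes over the text. A minimal sketch of the idea (the patterns and redaction labels below are simplified illustrations, not safeguard-ai's actual rules):

```javascript
// Illustrative regex-based PII redaction. Patterns are deliberately
// simplified assumptions, not safeguard-ai's production rules.
const PII_PATTERNS = [
  { label: 'REDACTED_EMAIL', regex: /[\w.+-]+@[\w-]+\.[\w.-]+/g },
  { label: 'REDACTED_SSN',   regex: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: 'REDACTED_PHONE', regex: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g }
];

function redactPII(text) {
  // Apply each pattern in turn, replacing matches with a labeled token.
  return PII_PATTERNS.reduce(
    (acc, { label, regex }) => acc.replace(regex, `[${label}]`),
    text
  );
}

console.log(redactPII('Email me at john@example.com or call 555-123-4567'));
// → "Email me at [REDACTED_EMAIL] or call [REDACTED_PHONE]"
```

Real-world patterns need far more care (international phone formats, Luhn checks for card numbers), which is where the library earns its keep.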

Custom Rules

Enforce brand safety with custom word blocklists and regex patterns tailored to your business needs.
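
Conceptually, a custom rule set is just a word blocklist plus a list of regexes evaluated against each message. A self-contained sketch of that evaluation (the rule shape here is an assumption for illustration, not safeguard-ai's configuration format):

```javascript
// Illustrative custom-rule check: word blocklist + regex patterns.
// The rule object shape is an assumption for this sketch.
const rules = {
  blockedWords: ['competitorname', 'forbiddenterm'],
  patterns: [/\bpromo\s*code\s*:\s*\w+/i]  // e.g. catch leaked promo codes
};

function violatesRules(text, { blockedWords, patterns }) {
  const lower = text.toLowerCase();
  // Flag the text if any blocked word appears or any pattern matches.
  return (
    blockedWords.some((word) => lower.includes(word)) ||
    patterns.some((re) => re.test(text))
  );
}

console.log(violatesRules('Use promo code: SAVE20', rules)); // true
console.log(violatesRules('Hello world', rules));            // false
```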

Hybrid Speed

Fast local pattern matching combined with precise cloud AI, balancing low latency against accuracy.
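
The hybrid flow can be sketched as: run cheap local checks first, and only fall back to the slower cloud model when nothing obvious is found. In this illustration `cloudModerate` is a stand-in stub, not a real provider call:

```javascript
// Illustrative hybrid pipeline: local blocklist first, cloud AI second.
const LOCAL_BLOCKLIST = ['obviousslur'];

async function cloudModerate(text) {
  // Stand-in stub; the real flow would call a provider's moderation API.
  return { safe: true, categories: [] };
}

async function checkHybrid(text) {
  const lower = text.toLowerCase();
  if (LOCAL_BLOCKLIST.some((word) => lower.includes(word))) {
    // Cheap local hit: block immediately, no network round trip.
    return { safe: false, categories: ['blocklist'], source: 'local' };
  }
  // Ambiguous content goes to the slower, more accurate cloud model.
  return { ...(await cloudModerate(text)), source: 'cloud' };
}

checkHybrid('hello there').then((r) => console.log(r.source)); // "cloud"
```

The design point is that the common, clear-cut cases never pay the latency or cost of a network call.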

Multi-Provider

Provider-agnostic design. Switch between OpenAI, Azure, or Perspective API with zero friction.
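
In a provider-agnostic design, switching backends is typically a configuration change rather than a code change. A hypothetical sketch of what that might look like (the `provider` option name and value strings here are assumptions, not confirmed API; check the documentation for the actual constructor options):

```javascript
// Hypothetical configuration sketch — option names are assumptions.
const moderator = new SafeguardAI({
  provider: 'perspective',                    // or 'openai', 'azure'
  apiKey: process.env.PERSPECTIVE_API_KEY,
  redactPII: true
});
```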

Compliance Ready

Simplifies GDPR, HIPAA, and PCI DSS compliance requirements for AI-generated content.

Explore Documentation