A modern content-moderation, PII-detection, and safety toolkit for developers. Filter toxic content, protect user privacy, and stay compliant in minutes.
```js
const SafeguardAI = require('safeguard-ai');

// Initialize the shield
const moderator = new SafeguardAI({
  apiKey: process.env.OPENAI_API_KEY,
  redactPII: true,
});

// Protect your application
const result = await moderator.checkText('Email me at john@example.com');

if (!result.safe) {
  console.log('Blocked:', result.categories);
} else {
  // Redacted: "Email me at [REDACTED_EMAIL]"
  processMessage(result.cleanText);
}
```
Advanced detection of toxicity, hate speech, violence, and harassment using context-aware LLM providers.
Automatic detection and redaction of emails, phone numbers, SSNs, and credit card numbers to keep your data compliant.
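To illustrate the kind of local redaction described above, here is a minimal standalone sketch using regular expressions. The pattern set and `[REDACTED_*]` labels are illustrative assumptions, not SafeguardAI's actual rules.

```javascript
// Illustrative PII patterns; ordering matters (email first, since it can
// contain digits that other patterns might otherwise touch).
const PII_PATTERNS = [
  { label: 'EMAIL', regex: /[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g },
  { label: 'SSN', regex: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: 'CREDIT_CARD', regex: /\b(?:\d[ -]?){13,16}\b/g },
  { label: 'PHONE', regex: /\b\d{3}[-.]\d{3}[-.]\d{4}\b/g },
];

function redactPII(text) {
  let cleanText = text;
  for (const { label, regex } of PII_PATTERNS) {
    cleanText = cleanText.replace(regex, `[REDACTED_${label}]`);
  }
  return cleanText;
}

console.log(redactPII('Email me at john@example.com or call 555-867-5309'));
// → "Email me at [REDACTED_EMAIL] or call [REDACTED_PHONE]"
```

Regex-only redaction is fast but imprecise (e.g. international phone formats, Luhn validation for cards); a production library would layer stricter validators on top.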
Enforce brand safety with custom word blocklists and regex patterns tailored to your business needs.
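A blocklist-plus-regex rule engine of this kind can be sketched as follows. The option names (`blocklist`, `customPatterns`) and the returned shape are assumptions for illustration, not SafeguardAI's documented API.

```javascript
// Hypothetical brand-safety checker: word blocklist + custom regex rules.
// Blocklist entries are assumed regex-safe; escape them in production.
function buildBrandSafetyChecker({ blocklist = [], customPatterns = [] } = {}) {
  const wordRules = blocklist.map((w) => ({
    rule: `blocklist:${w}`,
    regex: new RegExp(`\\b${w}\\b`, 'i'),
  }));
  const patternRules = customPatterns.map(({ name, regex }) => ({
    rule: `pattern:${name}`,
    regex,
  }));
  const rules = [...wordRules, ...patternRules];

  return function check(text) {
    const violations = rules
      .filter(({ regex }) => regex.test(text))
      .map(({ rule }) => rule);
    return { safe: violations.length === 0, violations };
  };
}

const check = buildBrandSafetyChecker({
  blocklist: ['competitorX'],
  customPatterns: [{ name: 'internal-ticket', regex: /\bJIRA-\d+\b/ }],
});

console.log(check('See JIRA-123 for details'));
// → { safe: false, violations: ['pattern:internal-ticket'] }
```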
Fast local pattern matching combined with precise cloud AI for the perfect balance of speed and safety.
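The hybrid strategy above can be sketched as a two-stage pipeline: a zero-latency local pass handles clear-cut cases, and only ambiguous text escalates to a cloud model. The pattern list and the stubbed `cloudModerate` call are placeholders, not SafeguardAI's real internals.

```javascript
// Stage 1: cheap local patterns for unambiguous violations.
const OBVIOUS_BLOCK = [/\bkill yourself\b/i];

// Stage 2: stand-in for a provider call (OpenAI, Azure, Perspective, ...).
async function cloudModerate(text) {
  return { safe: true, categories: [] };
}

async function checkText(text) {
  // Fast path: no network round-trip for obvious violations.
  if (OBVIOUS_BLOCK.some((re) => re.test(text))) {
    return { safe: false, categories: ['local:blocklist'] };
  }
  // Slow path: context-aware cloud model for everything else.
  return cloudModerate(text);
}

checkText('hello there').then((r) => console.log(r.safe)); // → true
```

The design trade-off: local rules have near-zero cost but no context awareness, so they should only block what is unambiguous and defer nuanced judgments to the model.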
Provider-agnostic design. Switch between OpenAI, Azure, or Perspective API with zero friction.
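Provider-agnostic designs like this typically rest on an adapter pattern: each backend is wrapped behind one common interface. The sketch below uses stub providers named after the services listed above; the internal interface shown is an assumption, not SafeguardAI's documented code.

```javascript
// Each adapter exposes the same moderate(text) signature.
const providers = {
  openai: { moderate: async (text) => ({ safe: true, source: 'openai' }) },
  azure: { moderate: async (text) => ({ safe: true, source: 'azure' }) },
  perspective: { moderate: async (text) => ({ safe: true, source: 'perspective' }) },
};

function createModerator(name) {
  const provider = providers[name];
  if (!provider) throw new Error(`Unknown provider: ${name}`);
  // Callers always see the same shape, so switching backends
  // is a one-line configuration change.
  return { checkText: (text) => provider.moderate(text) };
}

createModerator('azure').checkText('hi').then((r) => console.log(r.source)); // → "azure"
```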
Simplifies GDPR, HIPAA, and PCI-DSS compliance requirements for AI-generated content.