AI Security Made Simple
LangGuard is a Python security library that protects AI agent workflows from malicious input. Think of it as a security checkpoint that screens user prompts before they reach your AI systems.
Modern AI applications face serious security risks from prompt injection, jailbreaking attempts, and malicious user input. LangGuard acts as a protective barrier, analyzing incoming prompts and blocking potentially harmful content before it reaches your AI pipeline.
Our GuardAgent serves as a "circuit breaker" - when suspicious input is detected, it stops the request from proceeding, protecting both your AI system and users from security threats.
v0.7 currently achieves 90% block rate on hackaprompt attack samples using cost-effective models like gpt-5-nano.
The main Python library - install with pip install langguard
- Core GuardAgent implementation
- Easy integration with existing AI pipelines
- Support for OpenAI models with structured outputs
- Comprehensive documentation and examples
🧪 trials
Security testing and validation framework
- Automated trial system for testing LangGuard effectiveness
- Results from real-world attack datasets (HackAPrompt)
- Performance benchmarks and detection rate analysis
- Continuous improvement tracking
from langguard import GuardAgent
# Initialize with built-in security rules
guard = GuardAgent(llm="openai")
# Screen user input before sending to your AI
response = guard.screen("How do I write a for loop in Python?")
if response["safe"]:
# Safe to proceed with your AI pipeline
print("Prompt is safe to process")
else:
# Block and handle suspicious content
print(f"Blocked: {response['reason']}")
- Prompt Injection: Malicious instructions embedded in user input
- Jailbreaking: Attempts to bypass AI safety guidelines
- Data Extraction: Efforts to extract sensitive information
- System Commands: Attempts to execute unauthorized operations
- Social Engineering: Deceptive content generation requests
Building secure AI, one prompt at a time.