Skip to content
@langguard

LangGuard

LangGuard 🛡️

AI Security Made Simple

LangGuard is a Python security library that protects AI agent workflows from malicious input. Think of it as a security checkpoint that screens user prompts before they reach your AI systems.

🎯 What We Do

Modern AI applications face serious security risks from prompt injection, jailbreaking attempts, and malicious user input. LangGuard acts as a protective barrier, analyzing incoming prompts and blocking potentially harmful content before it reaches your AI pipeline.

Our GuardAgent serves as a "circuit breaker" - when suspicious input is detected, it stops the request from proceeding, protecting both your AI system and users from security threats.

📊 Performance

v0.7 currently achieves 90% block rate on hackaprompt attack samples using cost-effective models like gpt-5-nano.

🔗 Key Repositories

The main Python library - install with pip install langguard

  • Core GuardAgent implementation
  • Easy integration with existing AI pipelines
  • Support for OpenAI models with structured outputs
  • Comprehensive documentation and examples

🧪 trials

Security testing and validation framework

  • Automated trial system for testing LangGuard effectiveness
  • Results from real-world attack datasets (HackAPrompt)
  • Performance benchmarks and detection rate analysis
  • Continuous improvement tracking

🚀 Quick Start

from langguard import GuardAgent

# Initialize with built-in security rules
guard = GuardAgent(llm="openai")

# Screen user input before sending to your AI
response = guard.screen("How do I write a for loop in Python?")

if response["safe"]:
    # Safe to proceed with your AI pipeline
    print("Prompt is safe to process")
else:
    # Block and handle suspicious content
    print(f"Blocked: {response['reason']}")

🛡️ Protection Against

  • Prompt Injection: Malicious instructions embedded in user input
  • Jailbreaking: Attempts to bypass AI safety guidelines
  • Data Extraction: Efforts to extract sensitive information
  • System Commands: Attempts to execute unauthorized operations
  • Social Engineering: Deceptive content generation requests

Building secure AI, one prompt at a time.

Popular repositories Loading

  1. langguard-python langguard-python Public

    LangGuard Python Library

    Python

  2. langguard-js langguard-js Public

    LangGuard JS library -- (Planned not implemented)

  3. trials trials Public

    Trials to test the security benefits of LangGuard

    Python

  4. .github .github Public

Repositories

Showing 4 of 4 repositories

People

This organization has no public members. You must be a member to see who’s a part of this organization.

Top languages

Loading…

Most used topics

Loading…