An interactive demo of simple prompt injection protection for LLM chatbots. Try it live at: https://bugzap.vercel.app
This project demonstrates basic and effective techniques for preventing prompt injection attacks in LLM-powered applications. Built with Next.js and Vercel's AI SDK, it lets you experiment with prompt injection attempts and see how simple protections work in real time.
This repository demonstrates effective techniques for preventing prompt injection attacks in LLM-powered applications. Built with Next.js and Vercel's AI SDK, it showcases security best practices for AI chat applications including input validation, output sanitization, and safe prompt engineering.
- 🛡️ Prompt Injection Prevention: Demonstrates simple protection against malicious prompts
- � Input Sanitization: Implements proper validation and filtering of user inputs
- ⚡ Vercel AI SDK Integration: Shows secure implementation patterns with modern AI frameworks
- 🎯 System Prompt Protection: Techniques to prevent system prompt extraction and manipulation
- 💬 Safe Chat Interface: Secure chat implementation with proper context handling
- 📱 Mobile Responsive: Works seamlessly on all devices
- Frontend: Next.js 15, React 19, TypeScript, Tailwind CSS
- AI/ML: Groq API with DeepSeek R1 Distill model, Vercel AI SDK
- Security: Input validation, output sanitization, prompt isolation
- UI Components: Radix UI, Lucide React icons
- Deployment: Vercel
- Groq API Key: For the chat functionality
- Node.js: Version 18 or higher
Create a .env.local
file in the root directory:
# Groq Configuration
GROQ_API_KEY=your_groq_api_key_here
-
Clone the repository:
git clone <repository-url> cd prompt-injection-prevention-next
-
Install dependencies:
nvm use npm install
-
Run the development server:
npm run dev
-
Open your browser and navigate to
http://localhost:3000
This project showcases several prompt injection prevention techniques:
- Isolated system prompts that are not directly exposed to user input
- Clear separation between system instructions and user messages
- Proper sanitization of user inputs before processing
- Length limits and content filtering
- Controlled response generation with predefined boundaries
- Prevention of system information leakage
- Safe handling of conversation context
- Prevention of context manipulation attacks
You can test the application's resistance to common prompt injection techniques:
- System Prompt Extraction: Try asking the bot to reveal its system instructions
- Role Manipulation: Attempt to change the bot's role or behavior
- Context Pollution: Try to inject malicious context into the conversation
- Output Manipulation: Attempt to control the format or content of responses
The application demonstrates how proper implementation can resist these attacks while maintaining functionality.
By exploring this codebase, you'll learn:
- How to implement secure AI chat applications
- Best practices for prompt engineering in production
- Techniques for input validation and output control
- Methods to prevent common LLM vulnerabilities
- Integration patterns with Vercel AI SDK