A modern AI assistant that integrates with EGroupware to provide a conversational interface for accessing and managing EGroupware functionalities.
This project implements an AI-powered chatbot system for EGroupware that allows users to interact with their EGroupware data using natural language. The system consists of two main components:
- Agent Service: Handles the conversational interface, LLM interactions, authentication, and user interface
- Tool Server: Connects to EGroupware APIs and provides tools for the agent to perform actions in EGroupware
The system is designed as a microservice architecture with Docker containerization:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ │ │ │ │ │
│ Web Frontend │◄────┤ Agent Service │◄────┤ Tool Server │
│ (Browser) │ │ (FastAPI) │ │ (FastAPI) │
│ │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
▼
┌─────────────────┐
│ │
│ EGroupware │
│ │
└─────────────────┘
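In Docker Compose terms, the topology above looks roughly like the sketch below. This is illustrative only; the authoritative configuration lives in docker-compose.yml in the repository, and the nginx reverse proxy described later in this document is included here as well:

```yaml
# Sketch of the service topology (illustrative; service and file names
# come from the project layout, everything else is assumed).
services:
  nginx:                         # TLS termination and HTTP->HTTPS redirect
    ports: ["80:80", "443:443"]
    depends_on: [agent-service]
  agent-service:                 # FastAPI: chat UI, auth, LLM calls
    build:
      context: .
      dockerfile: agent.Dockerfile
    environment:
      - TOOL_SERVER_URL=http://tool-server:8001
  tool-server:                   # FastAPI: EGroupware API tools
    build:
      context: .
      dockerfile: tool.Dockerfile
```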
- Conversational AI Interface: Natural language processing for EGroupware interactions
- Authentication: Secure login with EGroupware credentials
- EGroupware Integration: Access to key EGroupware modules:
- Addressbook management
- Calendar events and appointments
- InfoLog (tasks and notes)
- Email communications
- Knowledge base access
- Multi-model LLM Support: Configurable to work with various LLM providers
- Containerized Deployment: Easy setup with Docker Compose
- Docker and Docker Compose
- EGroupware instance with API access
- LLM API keys (OpenAI, Azure, Anthropic, etc.)
- Create a `.env` file in the project root with the following variables:
# .env file for EGroupware Chatbot
# Tool Server Configuration
TOOL_SERVER_URL=http://tool-server:8001
# Security (change these values!)
JWT_SECRET=your_jwt_secret_here
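A minimal sketch of how a service could read these variables at startup, using only the standard library. The `load_settings` helper and its defaults are assumptions for illustration, not the project's actual code:

```python
import os

def load_settings(env=os.environ):
    """Read chatbot settings from the environment (docker-compose injects
    the .env file) and fail fast if the placeholder secret was kept."""
    secret = env.get("JWT_SECRET", "")
    if not secret or secret == "your_jwt_secret_here":
        raise RuntimeError("Set a real JWT_SECRET in .env before starting")
    return {
        # Internal Docker network address of the tool server (assumed default)
        "tool_server_url": env.get("TOOL_SERVER_URL", "http://tool-server:8001"),
        "jwt_secret": secret,
    }
```

Failing fast on the template secret prevents accidentally deploying with a publicly known JWT signing key.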
# Build and start the containers
docker-compose up -d
# Check logs if needed
docker-compose logs -f

The application will be available at https://localhost (HTTPS via nginx). The old direct http://localhost:8000 endpoint is now internal only.
If ports 80/443 are already in use on the host, the compose file remaps nginx to 8080/8443 (access: https://localhost:8443). Adjust the ports: section of the nginx service if you free 80/443 and want the canonical ports again.
- Navigate to https://localhost in your web browser (accept the self-signed certificate on first visit)
- Log in with your EGroupware credentials
- Start interacting with the chatbot by typing natural language queries such as:
- "Show my upcoming meetings for next week"
- "Create a new contact for John Doe with email [email protected]"
- "Find documents about project planning in the knowledge base"
- `agent_service/`: The main service that handles the LLM interface and user interactions
  - `main.py`: FastAPI application entry point
  - `auth.py`: Authentication handling
  - `llm_service.py`: LLM provider integrations
  - `prompts.py`: System prompts for the LLM
  - `schemas.py`: Data models
- `tool_server/`: The service that connects to EGroupware
  - `tools/`: Individual tool implementations
    - `addressbook.py`: Contact management functions
    - `calendar.py`: Calendar event functions
    - `infolog.py`: Tasks and notes functions
    - `mail.py`: Email functions
    - `knowledge.py`: Knowledge base search functions
  - `knowledge/`: Local knowledge base files
- `static/`: Web frontend files
  - `index.html`: Main chat interface
  - `login.html`: Login page
  - `script.js`: Frontend JavaScript
  - `style.css`: Styling
- `docker-compose.yml`: Docker Compose configuration
- `agent.Dockerfile`: Dockerfile for the agent service
- `tool.Dockerfile`: Dockerfile for the tool server
- `requirements.txt`: Python dependencies
- Create a Python virtual environment:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt

- Run the services separately (HTTP only, for local dev without nginx TLS):
# Terminal 1 - Tool Server
uvicorn tool_server.main:app --reload --port 8001
# Terminal 2 - Agent Service
uvicorn agent_service.main:app --reload --port 8000

Production (and the default Docker Compose) uses an nginx reverse proxy that:
- Terminates TLS on port 443 using the certificate in `ssl/cert.pem` and key `ssl/key.pem`
- Redirects all HTTP (port 80) to HTTPS
- Proxies traffic to the internal FastAPI services (`agent-service` on 8000, `tool-server` on 8001)
- Optionally exposes Tool Server docs at https://localhost/tools/docs (FastAPI docs) if enabled in config
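A minimal sketch of what that nginx configuration might look like. The upstream names match the compose services, but the certificate paths and location blocks are assumptions, not the shipped config:

```nginx
# Redirect all plain-HTTP requests to HTTPS
server {
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/cert.pem;   # mounted from ssl/cert.pem
    ssl_certificate_key /etc/nginx/ssl/key.pem;    # mounted from ssl/key.pem

    # Main app: chat UI + agent API
    location / {
        proxy_pass http://agent-service:8000;
    }

    # Optional: Tool Server FastAPI docs
    location /tools/ {
        proxy_pass http://tool-server:8001/;
    }
}
```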
Replace the provided self-signed certs with real certificates for production. You can generate a new self-signed pair for testing:
openssl req -x509 -nodes -newkey rsa:4096 \
-keyout ssl/key.pem -out ssl/cert.pem -days 365 \
  -subj "/C=DE/ST=NRW/L=Cologne/O=EGroupware/OU=AI/CN=localhost"

If you want automated Let's Encrypt certificates, add a companion container like nginx-proxy + acme-companion, or use certbot on the host and mount the live certs into the nginx service.
After each assistant response the UI now fetches up to 4 AI-generated "quick reply" buttons (endpoint: GET /suggestions?token=...&count=4). Clicking a button sends that suggestion as the next user message. If no history exists, starter suggestions are shown. The endpoint:
- Trims recent conversation (last ~6 turns)
- Prompts the model to return only a JSON array
- Falls back to safe defaults if parsing fails or model errors
To change the number of buttons, adjust the `count` query param (1–6) or modify the initial fetch in `static/script.js`.
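The fallback behaviour can be sketched in a few lines. The helper and the default list below are illustrative, not the actual implementation:

```python
import json

# Illustrative starter suggestions (the real defaults live in the service).
STARTER_SUGGESTIONS = [
    "Show my upcoming meetings",
    "List my open tasks",
    "Search the knowledge base",
    "Check my latest emails",
]

def parse_suggestions(raw, count=4):
    """Parse the model reply as a JSON array of strings; fall back to
    safe defaults if the reply is not valid JSON or not a string list."""
    try:
        data = json.loads(raw)
        if isinstance(data, list) and all(isinstance(s, str) for s in data):
            return data[:count]
    except (json.JSONDecodeError, TypeError):
        pass
    return STARTER_SUGGESTIONS[:count]
```

Because LLMs occasionally wrap JSON in prose or return malformed output, treating every parse failure as "use the defaults" keeps the quick-reply buttons working no matter what the model emits.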
The chat UI includes an optional microphone button that lets users dictate a message. When clicked, the browser records a short clip (WebM/Opus) and uploads it to the /transcribe endpoint. The server sends the audio to OpenAI Whisper (whisper-1) and returns the transcribed text, which is inserted into the message box for editing or immediate sending.
Current limitations:
- Only works when the selected provider is OpenAI (other providers will disable the mic button).
- Not enabled for Ionos / Anthropic / Azure in this version.
- Short-form dictation only (designed for clips under 45 s); longer recordings may fail or be slow.
- Audio is captured client-side; no local persistence.
Security / privacy notes:
- The raw JWT token is sent as a `token` form field with the audio; ensure HTTPS is enforced (already handled by nginx).
- Replace the self-signed certificate in production to avoid MITM risks.
Extending voice:
- Add multi-provider support by integrating alternative speech APIs (Azure Speech, Deepgram, etc.).
- Add streaming partial transcripts by switching to a WebSocket or streaming upload and incremental UI updates.
- Add language auto-detection (specify `language` in the Whisper call).
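The Whisper call itself is small. Here is a hedged sketch of a helper the /transcribe handler might use; the names are assumptions, and injecting the `client` (an openai.OpenAI instance with the openai>=1.0 SDK, or any stub with the same shape) keeps it testable without network access:

```python
from typing import Any, Optional

def transcribe_clip(client: Any, audio_file: Any,
                    language: Optional[str] = None) -> str:
    """Send a recorded clip to Whisper and return the transcript text.

    `client` is assumed to expose audio.transcriptions.create, as the
    openai>=1.0 SDK does. Passing a `language` code (e.g. "de") skips
    Whisper's language auto-detection; omit it to let Whisper detect.
    """
    kwargs = {"model": "whisper-1", "file": audio_file}
    if language:
        kwargs["language"] = language  # ISO-639-1 code
    result = client.audio.transcriptions.create(**kwargs)
    return result.text
```

Swapping in an Azure Speech or Deepgram client would then only require another implementation of this one helper.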
Ensure your .env uses the internal service name for the tool server now that HTTPS sits in front:
TOOL_SERVER_URL=http://tool-server:8001
Frontend/browser requests go to https://localhost, but the agent container still calls the tool server over the internal Docker network (HTTP is fine internally).