The AI assistant with a persistent, self-organizing brain.
Built in Go for high performance, privacy, and true long-term memory.
Assistant-Go is not just another chatbot. It's a stateful AI companion engineered to solve the fundamental problem of digital amnesia in modern AI. It features a sophisticated Cognitive Memory Architecture that allows it to learn from conversations, build a dynamic knowledge graph, and autonomously maintain its memory to ensure it remains coherent and relevant over time.
Forget the limitations of context windows. Assistant-Go builds its understanding in a persistent PostgreSQL database, using pgvector for semantic search. It doesn't just store what you say; it understands, consolidates, and reasons about information, proactively resolving conflicts and managing its own knowledge lifecycle.
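To give a sense of how that persistent store is queried, here is a minimal, illustrative sketch of a pgvector similarity lookup in Go. The `memories` table, `embedding` and `active` columns, and the `SearchMemories` helper are assumptions made for this example only; the project's actual schema and generated queries live under `internal/storage`.

```go
package example

import (
	"context"
	"fmt"
	"strings"

	"github.com/jackc/pgx/v5/pgxpool"
)

// SearchMemories returns the content of the stored memories closest to the
// query embedding, ordered by pgvector's cosine-distance operator (<=>).
// Table and column names are illustrative, not the project's real schema.
func SearchMemories(ctx context.Context, pool *pgxpool.Pool, queryEmbedding []float32, limit int) ([]string, error) {
	// pgvector accepts a vector literal of the form '[v1,v2,...]'.
	parts := make([]string, len(queryEmbedding))
	for i, v := range queryEmbedding {
		parts[i] = fmt.Sprintf("%g", v)
	}
	vec := "[" + strings.Join(parts, ",") + "]"

	rows, err := pool.Query(ctx, `
		SELECT content
		FROM memories
		WHERE active
		ORDER BY embedding <=> $1::vector
		LIMIT $2`, vec, limit)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var contents []string
	for rows.Next() {
		var c string
		if err := rows.Scan(&c); err != nil {
			return nil, err
		}
		contents = append(contents, c)
	}
	return contents, rows.Err()
}
```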
- **Cognitive Memory System**: The heart of the assistant. A complete knowledge lifecycle management system that goes far beyond simple conversation history.
  - Intelligent Fact Extraction: Uses LLMs to automatically parse conversations and structure them into typed memories (facts, preferences, schedules, etc.).
  - Tiered Semantic Deduplication: A key innovation. Instead of a binary check, it uses a graded approach to similarity, preventing data loss while eliminating redundancy (see the sketch after this feature list):
    - Above 0.98 similarity (Identical): the redundant memory is deactivated.
    - 0.90–0.98 similarity (Highly Similar): the memories are intelligently merged, combining their metadata.
    - 0.80–0.90 similarity (Related): a `similar_to` relationship is created in the knowledge graph, preserving both distinct memories.
  - Autonomous Knowledge Curation: A background `Consolidator` service constantly works to keep the knowledge base healthy by merging related concepts, archiving old information, and resolving contradictions.
  - Dynamic Knowledge Graph: Automatically builds a rich, queryable graph of relationships between memories based on shared entities, topics, and semantic similarity.
  - Graph-Aware Retrieval: The search mechanism traverses the knowledge graph to pull in not just direct matches but also contextually relevant information, providing the LLM with a far richer context.
  - Resilient & Asynchronous: All memory operations are handled in a robust background queue, so assistant responses stay fast and no information is lost.
- **Extensible Tool System**:
  - Built-in Tools: Comes with essential tools for file operations, web search, and time utilities.
  - MCP Integration: Seamlessly connect external tools via the Model Context Protocol (MCP) without changing a single line of code.
  - Natural Language Access: All tools are available through natural conversation.
- **Multi-Provider AI & Local-First Support**:
  - Claude (Anthropic) and Gemini (Google): Support for top-tier models with automatic failover.
  - Ollama: Full support for running local models, ensuring 100% privacy and offline capability.
- **Modern CLI Experience**:
  - Interactive Chat: A beautiful and intuitive terminal UI powered by Bubble Tea.
  - Guided Setup: A first-run wizard makes configuration and personalization effortless.
  - Multi-language: Native support for English, Traditional Chinese, and Japanese.
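The tiered deduplication above boils down to a threshold check once a cosine-similarity score is available. The sketch below is illustrative only; the `DedupAction` type and `classify` helper are not part of the project's actual API, they simply mirror the thresholds listed in the feature list.

```go
package example

// DedupAction is the outcome of the tiered similarity check.
type DedupAction int

const (
	KeepBoth    DedupAction = iota // below 0.80: treated as unrelated
	LinkSimilar                    // 0.80–0.90: add a similar_to edge, keep both
	Merge                          // 0.90–0.98: merge memories and their metadata
	Deactivate                     // above 0.98: the new memory is redundant
)

// classify maps a cosine-similarity score onto the tiers described above.
// Thresholds mirror the README; the names here are illustrative only.
func classify(similarity float64) DedupAction {
	switch {
	case similarity > 0.98:
		return Deactivate
	case similarity >= 0.90:
		return Merge
	case similarity >= 0.80:
		return LinkSimilar
	default:
		return KeepBoth
	}
}
```

What happens after classification (the actual merging, deactivation, or `similar_to` linking) is handled by the Consolidator and the knowledge graph described above.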
The easiest way to get started is with Docker Compose, which handles all dependencies (PostgreSQL, pgvector, SearXNG for web search) automatically.
Prerequisites: Docker and Docker Compose.
- **Clone the repository:**

  ```bash
  git clone https://github.com/koopa0/assistant-go.git
  cd assistant-go
  ```

- **Run the setup script:**

  ```bash
  ./chat.sh    # On macOS/Linux
  ./chat.bat   # On Windows
  ```
That's it! The script will:
- Create your `assistant.yaml` configuration file.
- Prompt you to add your AI provider API keys if they are missing.
- Start all required services in the background (`docker-compose up -d`).
- Connect you directly to the assistant's chat interface.
- Your PostgreSQL data (memories, conversations) is stored in a Docker volume named `postgres-data` and will persist across container restarts.
- To back up your data: `docker-compose exec postgres pg_dump -U postgres assistant > backup.sql`
- To restore: `docker-compose exec -T postgres psql -U postgres assistant < backup.sql`
To understand what makes Assistant-Go special, consider this scenario:
- **Initial Input**: You tell the assistant, "My favorite coffee is a flat white."
  - Extraction: The system identifies this as a `preference` memory.
  - Storage: A new memory is created with its vector embedding.
- **Second Input**: A week later, you mention, "I love drinking a good flat white in the morning."
  - Extraction: Another `preference` memory is created.
- **Autonomous Consolidation**: In the background, the `Consolidator` runs.
  - Detection: It finds that these two memories are semantically similar, with a score of ~0.92.
  - Action: Because the score falls within the "merge" threshold, it combines them into a single, more detailed memory: "User's favorite coffee is a flat white, which they enjoy in the morning." The original, less detailed memories are deactivated.
- **Retrieval**: You ask, "What's my usual coffee order?"
  - Search: The system performs a vector search for your query and finds the consolidated memory with a high relevance score.
  - Response: The assistant confidently answers, "Your usual coffee order is a flat white, which you seem to enjoy in the morning."
This continuous cycle of extraction, consolidation, and retrieval allows the assistant to build a robust and nuanced understanding over time.
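As a rough illustration of the consolidation step in this cycle, the sketch below merges two `preference` memories the way the flat-white example describes. The `Memory` type and `mergePreferences` function are hypothetical, not the project's actual types, and in practice the merged content would be produced by an LLM prompt rather than passed in directly.

```go
package example

import "time"

// Memory is an illustrative stand-in for a stored, typed memory.
type Memory struct {
	ID        string
	Type      string // e.g. "fact", "preference", "schedule"
	Content   string
	Active    bool
	UpdatedAt time.Time
}

// mergePreferences deactivates the two source memories and returns a new,
// consolidated memory, mirroring the flat-white example above.
func mergePreferences(a, b Memory, mergedContent string) (merged Memory, deactivated []Memory) {
	a.Active, b.Active = false, false
	merged = Memory{
		ID:        a.ID + "+" + b.ID, // placeholder ID scheme for the sketch
		Type:      "preference",
		Content:   mergedContent,
		Active:    true,
		UpdatedAt: time.Now(),
	}
	return merged, []Memory{a, b}
}
```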
The project follows a clean, modular architecture designed for maintainability and separation of concerns.
```
assistant-go/
├── cmd/              # Application entry points (main.go)
├── internal/
│   ├── ai/           # AI provider clients (Claude, Gemini, Ollama)
│   ├── assistant/    # Core assistant orchestrator and chat logic
│   ├── cli/          # Terminal UI (Bubble Tea) and command-line setup
│   ├── conversation/ # Conversation history management
│   ├── memory/       # 🧠 The core Cognitive Memory Architecture
│   ├── mcp/          # Model Context Protocol for external tools
│   ├── platform/     # Shared utilities (config, logger, shutdown handling)
│   ├── prompt/       # Prompt templating and management
│   ├── storage/      # Database layer (PostgreSQL, pgvector, sqlc)
│   └── tool/         # Built-in tool implementations and manager
└── configs/          # Configuration files (sqlc.yaml, etc.)
```
Contributions are highly welcome! This project uses a comprehensive `Makefile` to streamline development.
- Fork the repository.
- Create your feature branch (`git checkout -b feature/AmazingFeature`).
- Commit your changes (`git commit -m 'Add some AmazingFeature'`).
- Run checks with `make ci` (includes formatting, linting, and testing).
- Push to the branch (`git push origin feature/AmazingFeature`).
- Open a Pull Request.
Please see CONTRIBUTING.md for more detailed guidelines.
This project is licensed under the MIT License - see the LICENSE file for details.