diff --git a/docs/docs/ai-presets.mdx b/docs/docs/ai-presets.mdx
index de117a86c..2d1e34234 100644
--- a/docs/docs/ai-presets.mdx
+++ b/docs/docs/ai-presets.mdx
@@ -90,6 +90,27 @@ To connect to a local Ollama instance:
 Note: The `ai:apitoken` is required but can be any value as Ollama ignores it.
 See [Ollama OpenAI compatibility docs](https://github.com/ollama/ollama/blob/main/docs/openai.md) for more details.

+### Docker Model Runner
+
+If you have Docker Desktop or Docker CE, you can run LLMs locally with the Docker Model Runner. To connect to Docker Model Runner, use a preset similar to the following:
+
+```json
+{
+  "ai@docker-gemma": {
+    "display:name": "Docker Model Runner",
+    "display:order": 3,
+    "ai:*": true,
+    "ai:baseurl": "http://localhost:12434/engines/llama.cpp/v1",
+    "ai:name": "ai/gemma3n:latest",
+    "ai:model": "ai/gemma3n:latest",
+    "ai:apitoken": "not_used"
+  }
+}
+```
+
+Note: The `ai:apitoken` is required but can be any value as Docker Model Runner ignores it.
+See the [Docker Model Runner docs](https://docs.docker.com/ai/model-runner/) for details on how to pull models, configure model parameters such as context size, and more.
+
 ### Azure OpenAI

 To connect to Azure AI services:
diff --git a/docs/docs/faq.mdx b/docs/docs/faq.mdx
index 08797f690..6e1fb6d38 100644
--- a/docs/docs/faq.mdx
+++ b/docs/docs/faq.mdx
@@ -8,7 +8,7 @@ title: "FAQ"

 ### How do I configure Wave to use different AI models/providers?

-Wave supports various AI providers including local LLMs (via Ollama), Azure OpenAI, Anthropic's Claude, and Perplexity. The recommended way to configure these is through AI presets, which let you set up and easily switch between different providers and models.
+Wave supports various AI providers including local LLMs (via Ollama or Docker Model Runner), Azure OpenAI, Anthropic's Claude, and Perplexity. The recommended way to configure these is through AI presets, which let you set up and easily switch between different providers and models.

 See our [AI Presets documentation](/ai-presets) for detailed setup instructions for each provider.
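
The preset's `ai:baseurl` points at Docker Model Runner's OpenAI-compatible endpoint, so a plain chat-completions request is a quick way to confirm the URL and model name before wiring them into Wave. A minimal sanity check, assuming the runner is listening on the default `localhost:12434` and the `ai/gemma3n:latest` model has already been pulled:

```python
import json
import urllib.request

# Values below mirror the example preset; adjust the port or model name
# if your Docker Model Runner setup differs.
BASE_URL = "http://localhost:12434/engines/llama.cpp/v1"
payload = {
    "model": "ai/gemma3n:latest",
    "messages": [{"role": "user", "content": "Reply with one word: hello"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Any token is accepted; the runner ignores it, which is why the
        # preset sets ai:apitoken to "not_used".
        "Authorization": "Bearer not_used",
    },
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```

If this prints a reply, the same `ai:baseurl` and `ai:model` values should work in the preset.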