From 652a5eeb4c425bfce85646b91107ff583a85c5db Mon Sep 17 00:00:00 2001 From: shelajev Date: Sun, 6 Jul 2025 22:50:09 +0300 Subject: [PATCH] add docs on how to configure Docker Model Runner as the AI backend --- docs/docs/ai-presets.mdx | 21 +++++++++++++++++++++ docs/docs/faq.mdx | 2 +- 2 files changed, 22 insertions(+), 1 deletion(-) diff --git a/docs/docs/ai-presets.mdx b/docs/docs/ai-presets.mdx index de117a86c9..2d1e342346 100644 --- a/docs/docs/ai-presets.mdx +++ b/docs/docs/ai-presets.mdx @@ -90,6 +90,27 @@ To connect to a local Ollama instance: Note: The `ai:apitoken` is required but can be any value as Ollama ignores it. See [Ollama OpenAI compatibility docs](https://github.com/ollama/ollama/blob/main/docs/openai.md) for more details. +### Docker Model Runner + +If you have Docker Desktop or Docker CE running, you can serve local LLMs with the Docker Model Runner. To connect to Docker Model Runner, use a preset similar to the following: + +```json +{ + "ai@docker-gemma": { + "display:name": "Docker Model Runner", + "display:order": 3, + "ai:*": true, + "ai:baseurl": "http://localhost:12434/engines/llama.cpp/v1", + "ai:name": "ai/gemma3n:latest", + "ai:model": "ai/gemma3n:latest", + "ai:apitoken": "not_used" + } +} +``` + +Note: The `ai:apitoken` is required but can be any value as Docker Model Runner ignores it. +See the [Docker Model Runner docs](https://docs.docker.com/ai/model-runner/) for details on how to pull models, configure model parameters such as context size, and more. + ### Azure OpenAI To connect to Azure AI services: diff --git a/docs/docs/faq.mdx b/docs/docs/faq.mdx index 08797f6908..6e1fb6d385 100644 --- a/docs/docs/faq.mdx +++ b/docs/docs/faq.mdx @@ -8,7 +8,7 @@ title: "FAQ" ### How do I configure Wave to use different AI models/providers? -Wave supports various AI providers including local LLMs (via Ollama), Azure OpenAI, Anthropic's Claude, and Perplexity. 
The recommended way to configure these is through AI presets, which let you set up and easily switch between different providers and models. +Wave supports various AI providers including local LLMs (via Ollama or Docker Model Runner), Azure OpenAI, Anthropic's Claude, and Perplexity. The recommended way to configure these is through AI presets, which let you set up and easily switch between different providers and models. See our [AI Presets documentation](/ai-presets) for detailed setup instructions for each provider.