diff --git a/docs/prompt_engineering/how_to_guides/custom_openai_compliant_model.mdx b/docs/prompt_engineering/how_to_guides/custom_openai_compliant_model.mdx
index 5cc5ed7a2..c921fff98 100644
--- a/docs/prompt_engineering/how_to_guides/custom_openai_compliant_model.mdx
+++ b/docs/prompt_engineering/how_to_guides/custom_openai_compliant_model.mdx
@@ -4,25 +4,36 @@ sidebar_position: 2
 
 # Run the playground against an OpenAI-compliant model provider/proxy
 
-The LangSmith playground allows you to use any model that is compliant with the OpenAI API. You can utilize your model by setting the Proxy Provider for `OpenAI` in the playground.
+The LangSmith playground allows you to use any model that is compliant with the OpenAI API.
 
-## Deploy an OpenAI compliant model
+## Deploy an OpenAI-compliant model
 
-Many providers offer OpenAI compliant models or proxy services. Some examples of this include:
+Many providers offer OpenAI-compliant models or proxy services that wrap existing models with an OpenAI-compatible API. Some popular options include:
 
 - [LiteLLM Proxy](https://github.com/BerriAI/litellm?tab=readme-ov-file#quick-start-proxy---cli)
 - [Ollama](https://ollama.com/)
 
-You can use these providers to deploy your model and get an API endpoint that is compliant with the OpenAI API.
-
-Take a look at the full [specification](https://platform.openai.com/docs/api-reference/chat) for more information.
+These tools allow you to deploy models with an API endpoint that follows the OpenAI specification. For implementation details, refer to the [OpenAI API documentation](https://platform.openai.com/docs/api-reference/chat).
 
 ## Use the model in the LangSmith Playground
 
-Once you have deployed a model server, you can use it in the LangSmith Playground. Enter the playground and select the `Proxy Provider` inside the `OpenAI` modal.
+Once you have deployed a model server, you can use it in the LangSmith Playground.
+
+### Configure the playground
+
+1. Open the LangSmith Playground
+2. Change the provider to `Custom Model Endpoint`
+3. Enter your model's endpoint URL in the `Base URL` field
+4. Configure any additional parameters
+
+![Custom Model Endpoint](./static/custom_model_endpoint.png)
+
+The playground uses [`ChatOpenAI`](https://python.langchain.com/api_reference/openai/chat_models/langchain_openai.chat_models.base.ChatOpenAI.html) from `langchain-openai` under the hood, automatically configuring it with your custom endpoint as the `base_url`.
+
+### Test the connection
 
-![OpenAI Proxy Provider](./static/openai_proxy_provider.png)
+Click `Start` to test the connection. If everything is configured correctly, you should see your model's responses appear in the playground. You can then experiment with different prompts and parameters.
 
-If everything is set up correctly, you should see the model's response in the playground. You can also use this functionality to invoke downstream pipelines as well.
+## Save your model configuration
 
-See how to store your model configuration for later use [here](./managing_model_configurations).
+To reuse your custom model configuration in future sessions, see [how to save and manage model configurations](./managing_model_configurations).
diff --git a/docs/prompt_engineering/how_to_guides/static/custom_model_endpoint.png b/docs/prompt_engineering/how_to_guides/static/custom_model_endpoint.png
new file mode 100644
index 000000000..9be1d95c9
Binary files /dev/null and b/docs/prompt_engineering/how_to_guides/static/custom_model_endpoint.png differ