very early start of adding OpenAI API support #126
base: main
Conversation
UNTESTED. Since both Docker Model Runner (DMR) and OpenAI expose the same API endpoints, I wanted to help out by adding support to this project. I love self-hosting, but Ollama has a lot of issues. I think DMR could be a great replacement, since it could already be included in the Docker Compose setup. I have not tested this yet because my machine is still busy testing other things.
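To make the compatibility claim concrete: with the OpenAI Python SDK, the same call can target either backend just by switching `base_url`. A minimal sketch (the model names are illustrative; the DMR URL is the one used later in this PR):

```python
from openai import OpenAI

# Hosted OpenAI: default base URL, a real API key is required.
cloud = OpenAI(api_key="sk-...")

# Docker Model Runner: same SDK, same call shape, different base URL.
# DMR ignores the key, but the SDK insists on a non-empty value.
local = OpenAI(
    base_url="http://172.17.0.1:12434/engines/llama.cpp/v1",
    api_key="unused",
)

for client, model in ((cloud, "gpt-4o-mini"), (local, "ai/mistral")):
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one word."}],
    )
    print(resp.choices[0].message.content)
```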
Thanks for the contribution to AudioMuseAI. I updated the smoke test that you can find in /test; can you run it and share the results? It should take around 15-20 minutes. Can you also add a docker-compose-openai.yaml file that deploys AudioMuse-AI with the AI and a model ready to use (let's say Mistral:7b)? That way it will be easy to test and use too.
I renamed the functions and variables to be easier to understand and follow. One can either use a self-hosted, OpenAI-API-compatible server (such as LocalAI, Docker Model Runner, vLLM, OpenLLM, etc.), or use OpenAI's hosted servers by not providing a URL (in that case a valid API key is required).
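A minimal sketch of that selection logic, with hypothetical variable names (not the exact code in this PR):

```python
import os
from openai import OpenAI

def build_llm_client() -> OpenAI:
    """Build a client for either a self-hosted server or OpenAI's cloud.

    If OPENAI_BASE_URL is set, requests go to that OpenAI-API-compatible
    server (LocalAI, Docker Model Runner, vLLM, OpenLLM, ...); the key can
    be a dummy value. If no URL is given, the SDK targets api.openai.com
    and a valid API key is mandatory.
    """
    base_url = os.environ.get("OPENAI_BASE_URL") or None
    api_key = os.environ.get("OPENAI_API_KEY") or ("unused" if base_url else None)
    if api_key is None:
        raise RuntimeError("No OPENAI_BASE_URL set, so a valid OPENAI_API_KEY is required")
    return OpenAI(base_url=base_url, api_key=api_key)
```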
Hi, I don't understand if this is ready or you're still working on it, because it is an open PR and not just a draft. Anyway, I recompiled the image, took your docker-compose.yaml example where I just put in the Jellyfin information plus the OpenAI API token, and it gives an error. I didn't provide any URL because I understood the default URL for the cloud API is already there. Please tell me clearly when you think this PR is ready to be tested and merged, and share how to test it. I understood that both a self-hosted model and an internet API should be possible, so please share examples of both. Also, when finished, please run the smoke test in /test and share the results. Thanks!
I definitely still need time to make this a working PR. I am still testing functionality and will let you know when it is ready. I solved the API issues, but I am now working on default values and correctly naming functions.
Use OpenAI SDK for API calls. Still issues with the API key; still debugging.
To test this, make sure Docker is up to date and that docker-model-plugin is installed. I added the correct configuration to access the LLM model from inside the container. It is OpenAI-API compatible. I used the OpenAI Python SDK library to implement the function. The included LLM is about 500 MB, and on a laptop i5 processor it took around 20 s to return a result.
Updated the placeholder for the OpenAI base URL to reflect the correct endpoint for chat completions. The base URL should be empty to use the OpenAI API; otherwise a URL to a v1/chat/completions API endpoint must be provided. In the OpenAI docker compose yaml I provided the DMR URL: "http://172.17.0.1:12434/engines/llama.cpp/v1/chat/completions"
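One caveat worth noting (an assumption based on how the OpenAI Python SDK behaves, not something tested in this PR): the SDK appends /chat/completions to `base_url` by itself, so if the compose file supplies the full endpoint above, the method path likely has to be trimmed before constructing the client:

```python
import os
from openai import OpenAI

# Hypothetical env var carrying the URL from the docker compose file.
raw_url = os.environ.get(
    "OPENAI_BASE_URL",
    "http://172.17.0.1:12434/engines/llama.cpp/v1/chat/completions",
)

# The SDK expects the API root (".../v1") and adds "/chat/completions"
# itself, so strip the method path if the full endpoint was configured.
base_url = raw_url.removesuffix("/chat/completions")

client = OpenAI(base_url=base_url, api_key=os.environ.get("OPENAI_API_KEY", "unused"))
```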
I tried to spin up AudioMuse-AI with your code and your docker-compose.yaml example, but I have an issue. Searching around, it seems this is because Model Runner is a feature of Docker Desktop. If this is correct, it's not good, because AudioMuse-AI is primarily software for homelabs and should work on a headless server. I tried to remove the integrated-model part and run it with an external API, but I ran into an error there as well. Please review the docker compose file and/or provide a version that could also work without the integrated model. Thanks!
To use Docker Model Runner you have to update Docker and install docker-model-plugin.
Still trying to figure out how AI clients were implemented.
…1989/AudioMuse-AI into Open-AI-and-DMR-support
I replicated the exact implementation of Gemini and Ollama, hoping that solves the issues. It is not clear how API keys are sent from the UI.
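For reference, the kind of wiring I was looking for is sketched below, with hypothetical names (this is how I would expect a UI-supplied key to reach the client, not the project's actual code):

```python
from openai import OpenAI

def run_openai_prompt(prompt: str, api_key: str | None, base_url: str | None) -> str:
    """Build the client per call so a key entered in the UI takes effect immediately."""
    client = OpenAI(api_key=api_key or "unused", base_url=base_url or None)
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```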



I made changes that could affect other functionality, so please do a thorough review to make sure it is going in the right direction.