Releases: mudler/LocalAI

v3.5.4

20 Sep 07:49
f7f26b8

What's Changed

Bug fixes 🐛

  • fix(python): make option check uniform across backends by @mudler in #6314

Other Changes

  • chore: ⬆️ Update ggml-org/whisper.cpp to 44fa2f647cf2a6953493b21ab83b50d5f5dbc483 by @localai-bot in #6317
  • chore: ⬆️ Update ggml-org/llama.cpp to f432d8d83e7407073634c5e4fd81a3d23a10827f by @localai-bot in #6316
  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #6315

Full Changelog: v3.5.3...v3.5.4

v3.5.3

19 Sep 17:10
c27da0a

What's Changed

🧠 Models

  • chore(model gallery): add mistralai_magistral-small-2509 by @mudler in #6309
  • chore(model gallery): add impish_qwen_14b-1m by @mudler in #6310
  • chore(model gallery): add aquif-3.5-a4b-think by @mudler in #6311

👒 Dependencies

  • chore: ⬆️ Update ggml-org/llama.cpp to 3edd87cd055a45d885fa914d879d36d33ecfc3e1 by @localai-bot in #6308

Full Changelog: v3.5.2...v3.5.3

v3.5.2

18 Sep 07:37
902e47f

What's Changed

👒 Dependencies

  • Revert "feat(nvidia-gpu): bump images to cuda 12.8" by @mudler in #6303

Other Changes

  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #6305
  • chore: ⬆️ Update ggml-org/llama.cpp to 0320ac5264279d74f8ee91bafa6c90e9ab9bbb91 by @localai-bot in #6306

Full Changelog: v3.5.1...v3.5.2

v3.5.1

17 Sep 17:03
44bbf4d

What's Changed

Bug fixes 🐛

  • fix: make sure to turn down all processes on exit by @mudler in #6200
  • fix(p2p): automatically install llama-cpp for p2p workers by @mudler in #6199
  • Point to LocalAI-examples repo for llava by @mauromorales in #6241
  • fix: runtime capability detection for backends by @sozercan in #6149
  • fix(chat): use proper finish_reason for tool/function calling by @imkira in #6243
  • fix(rocm): Rename tag suffix for hipblas whisper build to match backend config by @KingJ in #6247
  • fix(llama-cpp): correctly calculate embeddings by @mudler in #6259

Exciting New Features 🎉

  • feat(launcher): show welcome page by @mudler in #6234
  • feat: support HF_ENDPOINT env for the HuggingFace endpoint by @qxo in #6220

🧠 Models

  • chore(model gallery): add nousresearch_hermes-4-14b by @mudler in #6197
  • chore(model gallery): add MiniCPM-V-4.5-8b-q4_K_M by @M0Rf30 in #6205
  • chore(model-gallery): ⬆️ update checksum by @localai-bot in #6211
  • feat(whisper): Add diarization (tinydiarize) by @richiejp in #6184
  • chore(model gallery): add baidu_ernie-4.5-21b-a3b-thinking by @mudler in #6267
  • chore(model gallery): add aquif-ai_aquif-3.5-8b-think by @mudler in #6269
  • chore(model gallery): add qwen3-stargate-sg1-uncensored-abliterated-8b-i1 by @mudler in #6270
  • chore(model gallery): add k2-think-i1 by @mudler in #6288
  • chore(model gallery): add holo1.5-72b by @mudler in #6289
  • chore(model gallery): add holo1.5-7b by @mudler in #6290
  • chore(model gallery): add holo1.5-3b by @mudler in #6291
  • chore(model gallery): add alibaba-nlp_tongyi-deepresearch-30b-a3b by @mudler in #6295
  • chore(model gallery): add webwatcher-7b by @mudler in #6297
  • chore(model gallery): add webwatcher-32b by @mudler in #6298
  • chore(model gallery): add websailor-32b by @mudler in #6299
  • chore(model gallery): add websailor-7b by @mudler in #6300

📖 Documentation and examples

  • chore(docs): add MacOS dmg download button by @mudler in #6233

👒 Dependencies

  • chore(deps): bump github.com/opencontainers/image-spec from 1.1.0 to 1.1.1 by @dependabot[bot] in #6223
  • chore(deps): bump actions/stale from 9.1.0 to 10.0.0 by @dependabot[bot] in #6227
  • chore(deps): bump go.opentelemetry.io/otel/exporters/prometheus from 0.50.0 to 0.60.0 by @dependabot[bot] in #6226
  • chore(deps): bump oras.land/oras-go/v2 from 2.5.0 to 2.6.0 by @dependabot[bot] in #6225
  • chore(deps): bump github.com/swaggo/swag from 1.16.3 to 1.16.6 by @dependabot[bot] in #6222
  • chore(deps): bump actions/labeler from 5 to 6 by @dependabot[bot] in #6229
  • feat(nvidia-gpu): bump images to cuda 12.8 by @mudler in #6239
  • feat(chatterbox): add MPS, and CPU, pin version by @mudler in #6242

Other Changes

  • chore: ⬆️ Update ggml-org/llama.cpp to 0fce7a1248b74148c1eb0d368b7e18e8bcb96809 by @localai-bot in #6193
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to 2eb3845df5675a71565d5a9e13b7bad0881fafcd by @localai-bot in #6192
  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #6201
  • chore: ⬆️ Update ggml-org/llama.cpp to fb15d649ed14ab447eeab911e0c9d21e35fb243e by @localai-bot in #6202
  • Fix Typos in Docs by @alizfara112 in #6204
  • chore: ⬆️ Update ggml-org/whisper.cpp to bb0e1fc60f26a707cabf724edcf7cfcab2a269b6 by @localai-bot in #6203
  • chore: ⬆️ Update ggml-org/llama.cpp to 408ff524b40baf4f51a81d42a9828200dd4fcb6b by @localai-bot in #6207
  • chore: ⬆️ Update ggml-org/llama.cpp to c4df49a42d396bdf7344501813e7de53bc9e7bb3 by @localai-bot in #6209
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to d7f430cd693f2e12ecbaa0ce881746cf305c3b1f by @richiejp in #6213
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to c648001030d4c2cc7c851fdaf509ee36d642dc99 by @localai-bot in #6215
  • chore: ⬆️ Update ggml-org/llama.cpp to 3976dfbe00f02a62c0deca32c46138e4f0ca81d8 by @localai-bot in #6214
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to abb115cd021fc2beed826604ed1a479b6a77671c by @localai-bot in #6236
  • chore: ⬆️ Update ggml-org/whisper.cpp to edea8a9c3cf0eb7676dcdb604991eb2f95c3d984 by @localai-bot in #6237
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to b0179181069254389ccad604e44f17a2c25b4094 by @localai-bot in #6246
  • chore: ⬆️ Update ggml-org/llama.cpp to 0e6ff0046f4a2983b2c77950aa75960fe4b4f0e2 by @localai-bot in #6235
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to fce6afcc6a3250a8e17923608922d2a99b339b47 by @richiejp in #6256
  • chore: ⬆️ Update ggml-org/llama.cpp to 40be51152d4dc2d47444a4ed378285139859895b by @localai-bot in #6260
  • chore: ⬆️ Update ggml-org/llama.cpp to aa0c461efe3603639af1a1defed2438d9c16ca0f by @localai-bot in #6261
  • chore(aio): upgrade minicpm-v model to latest 4.5 by @M0Rf30 in #6262
  • chore: ⬆️ Update ggml-org/llama.cpp to 0fa154e3502e940df914f03b41475a2b80b985b0 by @localai-bot in #6263
  • chore: ⬆️ Update ggml-org/llama.cpp to 6c019cb04e86e2dacfe62ce7666c64e9717dde1f by @localai-bot in #6265
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to 0ebe6fe118f125665939b27c89f34ed38716bff8 by @richiejp in #6271
  • chore: ⬆️ Update ggml-org/llama.cpp to b907255f4bd169b0dc7dca9553b4c54af5170865 by @localai-bot in #6287
  • chore: ⬆️ Update ggml-org/llama.cpp to 8ff206097c2bf3ca1c7aa95f9d6db779fc7bdd68 by @localai-bot in #6292

Full Changelog: v3.5.0...v3.5.1

v3.5.0

03 Sep 20:23




🚀 LocalAI 3.5.0

Welcome to LocalAI 3.5.0! This release focuses on expanding backend support, improving usability, refining the overall experience, and continuing to reduce LocalAI's footprint, making it a truly portable, privacy-focused AI stack. We've added several new backends, enhanced the WebUI with new features, made significant performance improvements under the hood, and simplified LocalAI management with a new Launcher app (Alpha) available for Linux and macOS.

TL;DR – What’s New in LocalAI 3.5.0 🎉

  • 🖼️ Expanded Backend Support: Welcome to MLX! mlx, mlx-audio, and mlx-vlm are now all available in LocalAI. We also added support for WAN video generation, plus CPU and MPS variants of the diffusers backend, so you can now generate and edit images on macOS or without any GPU (albeit slowly).
  • WebUI Enhancements: Download model configurations, a manual model refresh button, streamlined error reporting during SSE events, and a stop button for running backends. Models can now also be imported and edited via the WebUI.
  • 🚀 Performance & Architecture: The Whisper backend has been rewritten in Purego with integrated Voice Activity Detection (VAD) for improved efficiency and stability. Stablediffusion also benefits from the Purego conversion.
  • 🛠️ Simplified Management: New LocalAI Launcher App (Alpha) for easy installation, startup, updates, and access to the WebUI.
  • Bug Fixes & Stability: Fixes for AMD RX 9060XT ROCm errors, libomp linking issues, model loading problems on macOS, CUDA device detection, and more.
  • Enhanced macOS support: whisper, diffusers, llama.cpp, MLX (VLM, Audio, LLM), and stable-diffusion.cpp now all work on macOS!

What’s New in Detail

🚀 New Backends and Model Support

We've significantly expanded the range of models you can run with LocalAI!

  • mlx-audio: Bring text to life with Kokoro's voice models on macOS, powered by MLX! Install the mlx-audio backend. Example configuration:
    backend: mlx-audio
    name: kokoro-mlx
    parameters:
      model: prince-canuma/Kokoro-82M
      voice: "af_heart"
      known_usecases:
        - tts
  • mlx-vlm: Experiment with the latest VLM models. While we don't have any in the gallery yet, it's really easy to configure; see #6119 for more details.
    name: mlx-gemma
    backend: mlx-vlm
    parameters:
      model: "mlx-community/gemma-3n-E2B-it-4bit"
    template:
      use_tokenizer_template: true
    known_usecases:
    - chat
  • WAN: Generate videos with Wan 2.1 or Wan 2.2 models using the diffusers backend, supporting both I2V (image-to-video) and T2V (text-to-video). Example configuration:
    name: wan21
    f16: true
    backend: diffusers
    known_usecases:
      - video
    parameters:
      model: Wan-AI/Wan2.1-T2V-1.3B-Diffusers
    diffusers:
      cuda: true
      pipeline_type: WanPipeline
      step: 40
    options:
      - guidance_scale:5.0
      - num_frames:81
      - torch_dtype:bf16
  • Diffusers CPU and macOS Support: Run diffusers models directly on your CPU without a GPU, or on a Mac! This opens up LocalAI to a wider range of hardware configurations.
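As a usage sketch for the kokoro-mlx configuration above: LocalAI exposes a TTS endpoint, so a request can be sent once the model is configured. The port, endpoint path, and output filename are assumptions based on a default installation; the snippet only validates the JSON payload locally and leaves the actual call commented out.

```shell
# Request payload for the kokoro-mlx model defined above.
PAYLOAD='{"model": "kokoro-mlx", "input": "Hello from LocalAI on Apple Silicon!"}'

# Sanity-check that the payload is valid JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# With a LocalAI instance running on the default port, send it with:
# curl http://localhost:8080/tts -H "Content-Type: application/json" \
#   -d "$PAYLOAD" -o kokoro.wav
```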

✨ WebUI Improvements

We've added several new features to make using LocalAI even easier:

  • Download Model Config: A "Get Config" button in the model gallery lets you download a model’s configuration file without installing the full model. This is perfect for custom setups and easier integration.
  • Manual Model Refresh: A new button allows you to manually refresh the on-disk YAML configuration, ensuring the WebUI always has the latest model information.
  • Streamlined Error Handling: Errors during SSE streaming events are now displayed directly to the user, providing better visibility and debugging information.
  • Backend Stop Button: Quickly stop running backends directly from the WebUI.
  • Model import and edit: Models can now be imported and edited directly from the WebUI.
  • Installed Backend List: Now displays installed backends in the WebUI for easier access and management.

🚀 Performance & Architecture Improvements

  • Purego Whisper Backend: The Whisper backend has been rewritten in Purego for increased performance and stability. This also includes integrated Voice Activity Detection (VAD) for detecting speech.
  • Purego Stablediffusion: Similar to Whisper, Stablediffusion has been converted to Purego, improving its overall architecture and enabling better compatibility.

🛠️ Simplified Management – Introducing the LocalAI Launcher (Alpha)

We're excited to introduce the first version of the LocalAI Launcher! This application simplifies:

  • Installation
  • Startup/Shutdown
  • Updates
  • Access to the WebUI and Application Folder

Please note: the launcher is in Alpha and may have bugs. The macOS build requires workarounds to run because the binaries are not yet signed; the specific steps are described here: https://discussions.apple.com/thread/253714860?answerId=257037956022#257037956022.

✅ Bug Fixes & Stability Improvements

  • AMD RX 9060XT ROCm Error: Fixed the "ROCm error: invalid device function" failure on AMD RX 9060XT GPUs, which was caused by device function incompatibility. The fix updates the ROCm image and ensures the correct GPU targets are specified during compilation.
  • libomp Linking: Resolved a missing libomp.so issue on macOS Docker containers.
  • macOS Model Loading: Addressed a problem where models could not be loaded on macOS. This was resolved by bundling necessary libutf8 libraries.
  • CUDA Device Detection: Improved detection of available GPU resources.
  • Flash Attention: flash_attention in llama.cpp now defaults to auto, letting the runtime pick the best setting.

Additional Improvements

  • System Backend: Added a new "system" backend path (LOCALAI_BACKENDS_SYSTEM_PATH or via command-line arguments) defaulting to /usr/share/localai/backends. This allows specifying a read-only directory for backends, useful for package management and system-wide installations.
  • P2P Model Sync: Implemented automatic synchronization of installed models between LocalAI instances within a federation. Currently limited to models installed through the gallery, and configuration changes are not synced. Future improvements will address these limitations.
  • Diffusers Image Source Handling: Enhanced image source selection in the diffusers backend, prioritizing ref_images over src for more robust loading behavior.
  • Darwin CI Builds: Added support for building some Go-based backends (Stablediffusion and Whisper) on Darwin (macOS) in the CI pipeline.
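The system backend path mentioned above can be sketched as follows; the variable name and default directory come from the release notes, while the service invocation at the end is an illustrative assumption:

```shell
# Use a read-only, system-wide directory for backends
# (/usr/share/localai/backends is the documented default).
export LOCALAI_BACKENDS_SYSTEM_PATH=/usr/share/localai/backends

# Confirm the variable is set before launching LocalAI:
echo "system backends: $LOCALAI_BACKENDS_SYSTEM_PATH"

# Then start LocalAI as usual, e.g.:
# local-ai run
```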

🚨 Important Notes

  • Launcher (Alpha): The LocalAI Launcher is in its early stages of development. Please report any issues you encounter. The MacOS build requires additional steps due to code signing.
  • Model Configuration Updates: Changes to model configuration files are not currently synchronized when using P2P model sync.

The Complete Local Stack for Privacy-First AI

LocalAI Logo

LocalAI

The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.

Link: https://github.com/mudler/LocalAI

LocalAGI Logo

LocalAGI

A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.

Link: https://github.com/mudler/LocalAGI


v3.4.0

12 Aug 07:13
b2e8b6d




🚀 LocalAI 3.4.0

What’s New in LocalAI 3.4.0 🎉

  • WebUI improvements: the image size can now be set during image generation
  • New backends: KittenTTS, Kokoro, and Dia are now available as backends, and models can be installed directly from the gallery
    Note: these backends need to be warmed up on the first call, which downloads the model files.
  • Support for reasoning effort in the OpenAI-compatible chat completion API
  • The diffusers backend is now available for L4T images and devices
  • During backend installation from the CLI, an alias and a name can be supplied (--alias and --name) to override configurations
  • Backends can now be sideloaded from the system: drag and drop a backend into the backends folder and it will just work!
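As a sketch of the new reasoning-effort support in the chat completion API: the field name follows the OpenAI convention and the model name is just an example from the gallery, so treat both as assumptions. The snippet validates the payload locally and leaves the actual request commented out.

```shell
# Chat completion payload carrying a reasoning-effort hint.
PAYLOAD='{"model": "gpt-oss-20b", "reasoning_effort": "high", "messages": [{"role": "user", "content": "Plan a 3-step debugging strategy."}]}'

# Validate the JSON locally before sending.
echo "$PAYLOAD" | python3 -m json.tool > /dev/null && echo "payload ok"

# Against a running LocalAI instance:
# curl http://localhost:8080/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$PAYLOAD"
```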

The Complete Local Stack for Privacy-First AI

LocalAI Logo

LocalAI

The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.

Link: https://github.com/mudler/LocalAI

LocalAGI Logo

LocalAGI

A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.

Link: https://github.com/mudler/LocalAGI

LocalRecall Logo

LocalRecall

A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI.

Link: https://github.com/mudler/LocalRecall

Thank you! ❤️

A massive THANK YOU to our incredible community and our sponsors! LocalAI has over 34,500 stars, and LocalAGI has already rocketed past 1k+ stars!

As a reminder, LocalAI is real FOSS (Free and Open Source Software) and its sibling projects are community-driven and not backed by VCs or a company. We rely on contributors donating their spare time and our sponsors to provide us the hardware! If you love open-source, privacy-first AI, please consider starring the repos, contributing code, reporting bugs, or spreading the word!

👉 Check out the reborn LocalAGI v2 today: https://github.com/mudler/LocalAGI

Full changelog 👇


What's Changed

Bug fixes 🐛

  • fix(llama.cpp): do not default to linear rope by @mudler in #5982

Exciting New Features 🎉

  • feat(webui): allow to specify image size by @mudler in #5976
  • feat(backends): add KittenTTS by @mudler in #5977
  • feat(kokoro): complete kokoro integration by @mudler in #5978
  • feat: add reasoning effort and metadata to template by @mudler in #5981
  • feat(transformers): add support to Dia by @mudler in #5991
  • feat(diffusers): add builds for nvidia-l4t by @mudler in #6004
  • feat(backends install): allow to specify name and alias during manual installation by @mudler in #5971

🧠 Models

  • chore(models): add gpt-oss-20b by @mudler in #5973
  • chore(models): add gpt-oss-120b by @mudler in #5974
  • feat(models): add support to qwen-image by @mudler in #5975
  • chore(model gallery): add openai_gpt-oss-20b-neo by @mudler in #5986
  • fix(harmony): improve template by adding reasoning effort and system_prompt by @mudler in #5985
  • chore(model gallery): add qwen_qwen3-4b-instruct-2507 by @mudler in #5987
  • chore(model gallery): add qwen_qwen3-4b-thinking-2507 by @mudler in #5988
  • chore(model gallery): add huihui-ai_huihui-gpt-oss-20b-bf16-abliterated by @mudler in #5995
  • chore(model gallery): add openai-gpt-oss-20b-abliterated-uncensored-neo-imatrix by @mudler in #5996
  • chore(model gallery): add tarek07_nomad-llama-70b by @mudler in #5997
  • chore: add Dia to the model gallery, fix backend by @mudler in #5998
  • chore(model gallery): add chatterbox by @mudler in #5999
  • chore(model gallery): add outetts by @mudler in #6000
  • chore(model gallery): add impish_nemo_12b by @mudler in #6007
  • chore(model-gallery): ⬆️ update checksum by @localai-bot in #6010

Other Changes

  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #5967
  • chore: ⬆️ Update ggml-org/llama.cpp to 41613437ffee0dbccad684fc744788bc504ec213 by @localai-bot in #5968
  • chore(deps): bump torch and diffusers by @mudler in #5970
  • chore(deps): bump torch and sentence-transformers by @mudler in #5969
  • chore: ⬆️ Update ggml-org/llama.cpp to fd1234cb468935ea087d6929b2487926c3afff4b by @localai-bot in #5972
  • chore: ⬆️ Update ggml-org/llama.cpp to e725a1a982ca870404a9c4935df52466327bbd02 by @localai-bot in #5984
  • feat(swagger): update swagger by @localai-bot in #5983
  • chore: ⬆️ Update ggml-org/llama.cpp to a0552c8beef74e843bb085c8ef0c63f9ed7a2b27 by @localai-bot in #5992
  • chore: ⬆️ Update ggml-org/whisper.cpp to 4245c77b654cd384ad9f53a4a302be716b3e5861 by @localai-bot in #5993
  • docs: update links in documentation by @lnnt in #5994
  • chore: ⬆️ Update ggml-org/llama.cpp to cd6983d56d2cce94ecb86bb114ae8379a609073c by @localai-bot in #6003
  • fix(l4t-diffusers): add sentencepiece by @mudler in #6005
  • chore: ⬆️ Update ggml-org/llama.cpp to 79c1160b073b8148a404f3dd2584be1606dccc66 by @localai-bot in #6006
  • chore: ⬆️ Update ggml-org/whisper.cpp to b02242d0adb5c6c4896d59ac86d9ec9fe0d0fe33 by @localai-bot in #6009
  • chore: ⬆️ Update ggml-org/llama.cpp to be48528b068111304e4a0bb82c028558b5705f05 by @localai-bot in #6012

Full Changelog: v3.3.2...v3.4.0

v3.3.2

04 Aug 14:52
d6274ea

What's Changed

Exciting New Features 🎉

  • feat(backends): install from local path by @mudler in #5962
  • feat(backends): allow backends to not have a metadata file by @mudler in #5963

📖 Documentation and examples

  • fix(docs): Improve responsiveness of tables by @dedyf5 in #5954

Other Changes

  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #5956
  • chore: ⬆️ Update ggml-org/whisper.cpp to 0becabc8d68d9ffa6ddfba5240e38cd7a2642046 by @localai-bot in #5958
  • chore: ⬆️ Update ggml-org/llama.cpp to 5c0eb5ef544aeefd81c303e03208f768e158d93c by @localai-bot in #5959
  • chore: ⬆️ Update ggml-org/llama.cpp to d31192b4ee1441bbbecd3cbf9e02633368bdc4f5 by @localai-bot in #5965

Full Changelog: v3.3.1...v3.3.2

v3.3.1

01 Aug 13:02
0b08508

This is a minor release; however, it addresses an important bug in the Intel GPU images, and the container image naming has changed.

This release also adds support for Flux Kontext and Flux Krea!

⚠️ Breaking change

The Intel GPU images have been consolidated: latest-gpu-intel-f32 and latest-gpu-intel-f16 are replaced by a single latest-gpu-intel tag, for example:

docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel

and for AIO (All-In-One) images:

docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel

🖼️ Flux kontext

Starting with this release, LocalAI supports Flux Kontext, which can be used to edit images via the API:

Install with:

local-ai run flux.1-kontext-dev

To test:

curl http://localhost:8080/v1/images/generations -H "Content-Type: application/json" -d '{
  "model": "flux.1-kontext-dev",
  "prompt": "change \"flux.cpp\" to \"LocalAI\"",
  "size": "256x256",
  "ref_images": [
    "https://raw.githubusercontent.com/leejet/stable-diffusion.cpp/master/assets/flux/flux1-dev-q8_0.png"
  ]
}'

What's Changed

Breaking Changes 🛠

  • fix(intel): Set GPU vendor on Intel images and cleanup by @richiejp in #5945

Exciting New Features 🎉

  • feat(stablediffusion-ggml): add support to ref images (flux Kontext) by @mudler in #5935

🧠 Models

  • chore(model gallery): add qwen_qwen3-30b-a3b-instruct-2507 by @mudler in #5936
  • chore(model gallery): add arcee-ai_afm-4.5b by @mudler in #5938
  • chore(model gallery): add qwen_qwen3-30b-a3b-thinking-2507 by @mudler in #5939
  • chore(model gallery): add flux.1-dev-ggml-q8_0 by @mudler in #5947
  • chore(model gallery): add flux.1-dev-ggml-abliterated-v2-q8_0 by @mudler in #5948
  • chore(model gallery): add flux.1-krea-dev-ggml by @mudler in #5949

Other Changes

  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #5929
  • chore: ⬆️ Update ggml-org/llama.cpp to 8ad7b3e65b5834e5574c2f5640056c9047b5d93b by @localai-bot in #5931
  • chore: ⬆️ Update leejet/stable-diffusion.cpp to f6b9aa1a4373e322ff12c15b8a0749e6dd6f0253 by @localai-bot in #5930
  • chore: ⬆️ Update ggml-org/whisper.cpp to d0a9d8c7f8f7b91c51d77bbaa394b915f79cde6b by @localai-bot in #5932
  • chore: ⬆️ Update ggml-org/llama.cpp to aa79524c51fb014f8df17069d31d7c44b9ea6cb8 by @localai-bot in #5934
  • chore: ⬆️ Update ggml-org/llama.cpp to e9192bec564780bd4313ad6524d20a0ab92797db by @localai-bot in #5940
  • chore: ⬆️ Update ggml-org/whisper.cpp to f7502dca872866a310fe69d30b163fa87d256319 by @localai-bot in #5941
  • chore: update swagger by @mudler in #5946
  • feat(stablediffusion-ggml): allow to load loras by @mudler in #5943
  • chore(capability): improve messages by @mudler in #5944
  • feat(swagger): update swagger by @localai-bot in #5950
  • chore: ⬆️ Update ggml-org/llama.cpp to daf2dd788066b8b239cb7f68210e090c2124c199 by @localai-bot in #5951

Full Changelog: v3.3.0...v3.3.1

v3.3.0

28 Jul 15:03
36179ff




🚀 LocalAI 3.3.0

What’s New in LocalAI 3.3.0 🎉

  • Object detection! Starting with 3.3.0, LocalAI also supports fast object detection via a new API. Just install the rfdetr-base model, and see the documentation to learn more
  • Backends now have defined mirrors for downloads, which helps when a primary registry fails during download
  • Bug fixes: we worked hard on squashing bugs in this release, ranging from container images to backends and installation scripts

The Complete Local Stack for Privacy-First AI

LocalAI Logo

LocalAI

The free, Open Source OpenAI alternative. Acts as a drop-in replacement REST API compatible with OpenAI specifications for local AI inferencing. No GPU required.

Link: https://github.com/mudler/LocalAI

LocalAGI Logo

LocalAGI

A powerful Local AI agent management platform. Serves as a drop-in replacement for OpenAI's Responses API, supercharged with advanced agentic capabilities and a no-code UI.

Link: https://github.com/mudler/LocalAGI

LocalRecall Logo

LocalRecall

A RESTful API and knowledge base management system providing persistent memory and storage capabilities for AI agents. Designed to work alongside LocalAI and LocalAGI.

Link: https://github.com/mudler/LocalRecall

Thank you! ❤️

A massive THANK YOU to our incredible community and our sponsors! LocalAI has over 34,100 stars, and LocalAGI has already rocketed past 900+ stars!

As a reminder, LocalAI is real FOSS (Free and Open Source Software) and its sibling projects are community-driven and not backed by VCs or a company. We rely on contributors donating their spare time and our sponsors to provide us the hardware! If you love open-source, privacy-first AI, please consider starring the repos, contributing code, reporting bugs, or spreading the word!

👉 Check out the reborn LocalAGI v2 today: https://github.com/mudler/LocalAGI

Full changelog 👇


What's Changed

Bug fixes 🐛

  • fix(backend gallery): intel images for python-based backends, re-add exllama2 by @mudler in #5928

Other Changes

  • docs: ⬆️ update docs version mudler/LocalAI by @localai-bot in #5920
  • chore: ⬆️ Update ggml-org/whisper.cpp to e7bf0294ec9099b5fc21f5ba969805dfb2108cea by @localai-bot in #5922
  • chore: ⬆️ Update ggml-org/llama.cpp to 11dd5a44eb180e1d69fac24d3852b5222d66fb7f by @localai-bot in #5921
  • chore: drop assistants endpoint by @mudler in #5926
  • chore: ⬆️ Update ggml-org/llama.cpp to bf78f5439ee8e82e367674043303ebf8e92b4805 by @localai-bot in #5927

Full Changelog: v3.2.3...v3.3.0

v3.2.3

26 Jul 06:31
a8057b9

What's Changed

Bug fixes 🐛

  • fix(cuda): be consistent with image tag naming by @mudler in #5916

📖 Documentation and examples

  • chore(docs): add documentation on backend detection override by @mudler in #5915

Other Changes

  • chore: ⬆️ Update ggml-org/llama.cpp to c7f3169cd523140a288095f2d79befb20a0b73f4 by @localai-bot in #5913

Full Changelog: v3.2.2...v3.2.3