SD.Next Release 10-17-2025 #4269
vladmandic announced in Announcements
SD.Next Release 2025-10-17
It's been a month since the last release and the number of changes is yet again massive, with over 300 commits!
Highlights are:
if you have a compatible GPU, performance gains are significant!
a lot of new stuff with Qwen-Image-Edit including multi-image edits and distilled variants,
new Flux, WAN, LTX, HiDream variants, expanded Nunchaku support and new SOTA upscaler with SeedVR2
plus improved video support in general, including new methods of video encoding
new SVD-style quantization using SDNQ offers almost zero loss even with 4-bit quantization
and now you can also test your favorite quantization on-the-fly and then save/load the model for future use
improved `torch-cpu` operations, improved previews, etc.

Details for 2025-10-17
available for text-to-image, text-to-video and image-to-video workflows
updated version of Qwen Image Edit with improved image consistency
pruned versions of Qwen with 13B params instead of 20B, with some quality tradeoff
SRPO is trained by Tencent with a specific technique: directly aligning the full diffusion trajectory with fine-grained human preferences
impact of the nunchaku engine on unet-based models such as sdxl is much smaller than on dit-based models, but it is still significantly faster than baseline
note that the nunchaku-optimized and pre-quantized unet is a replacement for the base unet, so it is only applicable to base models, not to any fine-tunes
how to use: enable nunchaku in settings -> quantization and then load either sdxl-base or sdxl-base-turbo reference models
updated version of HiDream-E1 image editing model
updated version of LTXVideo t2v/i2v model
originally designed for video restoration, seedvr works great for image detailing and upscaling!
available in 3B, 7B and 7B-sharp variants, use as any other upscaler!
note: seedvr is a very large model (6.4GB and 16GB respectively) and is not designed for lower-end hardware; quantization is highly recommended
note: seedvr is highly sensitive to its cfg scale, set in settings -> postprocessing
lower values will result in smoother output while higher values add details
experimental: X-Omni is a transformer-only, discrete autoregressive image generation model trained with reinforcement learning
why? SD.Next always prefers to start with the full model and quantize on-demand during load
however, when you find your exact preferred quantization settings that work well for you,
saving such model as a new model allows for faster loads and reduced disk space usage
so it's the best of both worlds: you can experiment with different quantization methods and, once you find the one that works for you, save it as a new model
saved models appear in network tab as normal models and can be loaded as such
available in models tab
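The tradeoff above rests on a simple property of group-wise low-bit quantization: reconstruction error stays bounded by the per-group scale, which is why 4-bit can be nearly lossless. A minimal toy sketch in plain Python (illustrative only, not the actual SDNQ implementation):

```python
def quantize_4bit(values, group_size=4):
    # toy group-wise symmetric 4-bit quantization: each group stores one
    # float scale plus small signed integers in the range -7..7
    out = []
    for i in range(0, len(values), group_size):
        g = values[i:i + group_size]
        scale = max(abs(v) for v in g) / 7 or 1.0
        out.append((scale, [round(v / scale) for v in g]))
    return out

def dequantize(groups):
    # reconstruction error per value is at most half the group's scale
    return [q * scale for scale, qs in groups for q in qs]

weights = [0.12, -0.55, 0.30, 0.91, -1.2, 0.05, 0.77, -0.33]
restored = dequantize(quantize_4bit(weights))
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Saving the `(scale, ints)` pairs instead of the full-precision weights is what makes a pre-quantized model both smaller on disk and faster to load.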
requires qwen-image-edit-2509 or its variants, as multi-image edits are not available in the original qwen-image
in ui control tab: inputs -> separate init image
add image for input media and control media
can be
cache-dit is a unified, flexible and training-free cache acceleration framework
compatible with many dit-based models such as FLUX.1, Qwen, HunyuanImage, Wan2.2, Chroma, etc.
enable in settings -> pipeline modifiers -> cache-dit
automatically enabled if the loaded model is FLUX.1 with the Nunchaku engine enabled and when the PuLID script is enabled
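The general idea behind this kind of training-free caching can be sketched independently of the cache-dit API (class and names below are illustrative, not cache-dit's): reuse a block's cached output whenever its input has barely changed since the last computed denoising step.

```python
class CachedBlock:
    """Wraps an expensive function; serves the cached output when the new
    input is within `tol` of the input that produced it."""

    def __init__(self, fn, tol=1e-2):
        self.fn = fn
        self.tol = tol
        self.last_in = None
        self.last_out = None

    def __call__(self, x):
        if self.last_in is not None and all(
            abs(a - b) < self.tol for a, b in zip(x, self.last_in)
        ):
            return self.last_out  # cache hit: skip recomputation
        self.last_in = list(x)
        self.last_out = self.fn(x)
        return self.last_out
```

Because adjacent denoising steps in diffusion transformers produce very similar intermediate activations, skipping near-duplicate block evaluations trades a small amount of accuracy for speed, with no retraining.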
if you're working from location with limited access to huggingface, you can now specify a mirror site
for example, enter:
https://hf-mirror.com
support for both the official torch preview release of `torch-rocm` for windows and TheRock unofficial `torch-rocm` builds for windows
note that rocm for windows is still in preview and has limited gpu support, please check rocm docs for details
`torch-directml` received no updates in over 1 year and is currently superseded by `rocm` or `zluda`
`--use-zluda` and `--use-rocm` will attempt the desired operation or fail if not possible
previously sdnext was performing a fallback to `torch-cpu`, which is not desired
if `--use-cuda` or `--use-rocm` is specified and `torch-cpu` is installed, the installer will attempt to reinstall the correct torch package
support for `torch==2.10-nightly` with `cuda==13.0`
was a high-value built-in extension, but it has not been maintained for 1.5 years
it also does not work with the control and video tabs, which are the core of sdnext nowadays
so it has been removed from built-in extensions; manual installation is still possible
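The huggingface mirror option mentioned above corresponds to the standard `HF_ENDPOINT` mechanism from `huggingface_hub`, so the same mirror can also be set as an environment variable before launch (the launcher command shown is illustrative; whether SD.Next's settings field and the env var interact is an assumption here):

```shell
# assumption: the server inherits huggingface_hub's standard HF_ENDPOINT variable
export HF_ENDPOINT=https://hf-mirror.com
./webui.sh   # illustrative launch command
```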
create heatmap visualizations of which parts of the prompt influenced which parts of the image
available in scripts for sdxl text-to-image workflows
`torch.streams`: enable in settings -> model offloading
in settings -> model offloading -> model types not to offload
main logo in top-left corner now indicates server connection status and hovering over it shows connection details
`SVDQuant` quantization method support
`matmul` support for RDNA2 GPUs via triton
`matmul` performance on Intel GPUs
keep incomplete setting is now save interrupted
`debug`, `docs` and `api-docs` by default
logging frequency can be specified using the `--monitor x` command line param, where x is the number of seconds
`/sdapi/v1/civitai` to trigger civitai models metadata update
accepts optional `page` parameter to search a specific networks page
`pynvml` replaced with `nvidia-ml-py` for gpu monitoring
enable in settings -> compute settings
can be combined with sdp, enabling may improve stability when used on iGPU or shared memory systems
`1.0.1` and enhance installer/docs
endpoint style
`[epoch]` added to filename template
`[seq]` for filename template is now the higher of the largest previous sequence or the number of files in the folder
also avoids creation of temporary files for each frame unless the user wants to save them
new command line flag enables new `pydantic` and `albumentations` packages
only compatible with some pipelines, invalidates preview generation
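The new `[seq]` rule described above can be sketched as follows (a hedged illustration, not SD.Next's actual implementation; `next_seq` is a hypothetical helper name):

```python
import os
import re

def next_seq(folder):
    # next sequence number is the greater of the largest existing numeric
    # filename prefix and the total number of files already in the folder
    files = os.listdir(folder)
    nums = [int(m.group(1)) for f in files if (m := re.match(r"(\d+)", f))]
    return max(max(nums, default=0), len(files)) + 1
```

Taking the maximum of both quantities avoids ever reusing a sequence number, even when some earlier files were renamed or deleted.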
allows for using many different guidance methods:
CFG, CFGZero, PAG, APG, SLG, SEG, TCFG, FDG
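As a reference point for the list above, classic CFG combines the conditional and unconditional noise predictions like this (a minimal sketch over plain lists; real pipelines do the same elementwise operation on tensors):

```python
def cfg_combine(uncond, cond, scale):
    # classifier-free guidance: extrapolate from the unconditional prediction
    # toward the conditional one; scale=1.0 reproduces the conditional output
    return [u + scale * (c - u) for u, c in zip(uncond, cond)]
```

The other listed methods (PAG, APG, SLG, SEG, TCFG, FDG, CFGZero) are variations on how this guidance direction is built or rescaled.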
note this will trigger a download of the new variant of the model; feel free to delete the older variant in the `huggingface` folder