AccVid LoRA key error when loading with diffusers #11702

Closed
@apolinario

Description

Describe the bug

Loading the AccVid LoRA isn't working with diffusers. For context: the AccVid LoRA for Wan 2.1 is another LoRA extracted by Kijai from a distilled model (AccVid), similar to CausVid.

This is the error:

Traceback (most recent call last):
  File "/home/user/app/app.py", line 26, in <module>
    pipe.load_lora_weights(accvid_path, adapter_name="accvid_lora")
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 4897, in load_lora_weights
    state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py", line 4795, in lora_state_dict
    state_dict = _convert_non_diffusers_wan_lora_to_diffusers(state_dict)
  File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/lora_conversion_utils.py", line 1658, in _convert_non_diffusers_wan_lora_to_diffusers
    converted_state_dict[f"blocks.{i}.ffn.{c}.lora_A.weight"] = original_state_dict.pop(
KeyError: 'blocks.0.ffn.2.lora_down.weight'
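
For reference, a minimal diagnostic sketch (not part of the reproduction) that lists which FFN keys the checkpoint actually ships for the first block, so it can be compared against the 'blocks.0.ffn.2.lora_down.weight' key the converter expects. It uses the same repo id and filename as the reproduction below; the substring filter is an assumption about the checkpoint's key naming.

from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

# Download the same LoRA file used in the reproduction below.
lora_path = hf_hub_download(
    repo_id="Kijai/WanVideo_comfy",
    filename="Wan21_AccVid_I2V_480P_14B_lora_rank32_fp16.safetensors",
)
state_dict = load_file(lora_path)

# Print every key touching the first transformer block's FFN, with its shape.
# The substring match is an assumption about the checkpoint's naming scheme.
for key in sorted(k for k in state_dict if "blocks.0.ffn" in k):
    print(key, tuple(state_dict[key].shape))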

Reproduction

import torch
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline, UniPCMultistepScheduler
from transformers import CLIPVisionModel
from huggingface_hub import hf_hub_download

MODEL_ID = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"
LORA_REPO_ID = "Kijai/WanVideo_comfy"
LORA_FILENAME = "Wan21_AccVid_I2V_480P_14B_lora_rank32_fp16.safetensors"

image_encoder = CLIPVisionModel.from_pretrained(MODEL_ID, subfolder="image_encoder", torch_dtype=torch.float32)
vae = AutoencoderKLWan.from_pretrained(MODEL_ID, subfolder="vae", torch_dtype=torch.float32)
pipe = WanImageToVideoPipeline.from_pretrained(
    MODEL_ID, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=8.0)
pipe.to("cuda")

accvid_path = hf_hub_download(repo_id=LORA_REPO_ID, filename=LORA_FILENAME)
pipe.load_lora_weights(accvid_path, adapter_name="accvid_lora")
pipe.set_adapters(["accvid_lora"], adapter_weights=[0.95])
pipe.fuse_lora()

System Info

diffusers on main at the latest commit, 00b179fb1afc147f87bd311f03b1ef7d747e1792

Who can help?

@sayakpaul, @a-r-r-o-w
