I don't know how to solve this. Here is the full log from my training run:
Using decoupled weight decay
Downloading data: 100%|██████████| 27/27 [00:01<00:00, 21.50files/s]
Generating train split: 100%|██████████| 26/26 [00:00<00:00, 1906.90 examples/s]
Caching latents: 100%|██████████| 50/50 [00:12<00:00, 3.88it/s]
10/07/2025 16:36:22 - INFO - main - ***** Running training *****
10/07/2025 16:36:22 - INFO - main - Num examples = 100
10/07/2025 16:36:22 - INFO - main - Num batches each epoch = 50
10/07/2025 16:36:22 - INFO - main - Num Epochs = 20
10/07/2025 16:36:22 - INFO - main - Instantaneous batch size per device = 2
10/07/2025 16:36:22 - INFO - main - Total train batch size (w. parallel, distributed & accumulation) = 2
10/07/2025 16:36:22 - INFO - main - Gradient Accumulation steps = 1
10/07/2025 16:36:22 - INFO - main - Total optimization steps = 1000
Steps: 0%| | 0/1000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/root/diffusers/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py", line 2476, in <module>
main(args)
File "/root/diffusers/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py", line 2262, in main
(weighting.float() * (model_pred_prior.float() - target_prior.float()) ** 2).reshape(
~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RuntimeError: The size of tensor a (4) must match the size of tensor b (2) at non-singleton dimension 0
Steps: 0%| | 0/1000 [00:02<?, ?it/s]
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 10, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.12/site-packages/accelerate/commands/accelerate_cli.py", line 50, in main
args.func(args)
File "/usr/local/lib/python3.12/site-packages/accelerate/commands/launch.py", line 1235, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.12/site-packages/accelerate/commands/launch.py", line 823, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', '/root/diffusers/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py', '--mixed_precision=bf16', '--pretrained_model_name_or_path=black-forest-labs/FLUX.1-dev', '--dataset_name=Buattensorart2/r41s4', '--output_dir=/root/output_lora', '--instance_prompt=a photo of a r4is4 woman', '--image_column=image', '--caption_column=caption', '--class_prompt=a photo of a woman', '--class_data_dir=/root/class_imgs', '--resolution=1024', '--train_batch_size=1', '--with_prior_preservation', '--prior_loss_weight=1.0', '--optimizer=prodigy', '--gradient_accumulation_steps=1', '--learning_rate=1', '--lr_scheduler=constant', '--rank=32', '--lr_warmup_steps=0', '--max_train_steps=1000', '--checkpointing_steps=200', '--seed=230112', '--allow_tf32', '--gradient_checkpointing', '--cache_latents', '--train_batch_size=2', '--prior_generation_precision=bf16']' returned non-zero exit status 1.
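For context on where the shapes come from: with `--with_prior_preservation`, the script concatenates instance and class samples into one batch and then `chunk`s the model prediction and target into an instance half and a prior half. The traceback suggests the per-sample `weighting` tensor still has the full batch size (4) while `model_pred_prior` only has the prior half (2). Note the launch command also passes `--train_batch_size` twice (`1`, then `2`); with argparse the last value wins, which matches the "batch size per device = 2" line in the log. The sketch below is a minimal reproduction of the mismatch (all tensor shapes are illustrative assumptions, not the script's exact dimensions), plus the fix of chunking the weighting alongside the predictions:

```python
import torch

# Assumed shapes for illustration: batch 4 = 2 instance + 2 class samples.
weighting = torch.ones(4, 1, 1)       # per-sample loss weights, full batch
model_pred = torch.randn(4, 16, 64)   # instance + class predictions
target = torch.randn(4, 16, 64)

# The script splits predictions/targets into instance and prior halves.
model_pred, model_pred_prior = model_pred.chunk(2, dim=0)  # each batch 2
target, target_prior = target.chunk(2, dim=0)

try:
    # Reproduces the failing expression: weighting still has batch 4,
    # model_pred_prior has batch 2 -> broadcasting fails at dim 0.
    prior_loss = (
        weighting.float() * (model_pred_prior.float() - target_prior.float()) ** 2
    ).mean()
except RuntimeError as e:
    print(e)  # same "size of tensor a (4) ... tensor b (2)" error as above

# Chunking the weighting the same way keeps the shapes aligned:
w, w_prior = weighting.chunk(2, dim=0)
prior_loss = (
    w_prior.float() * (model_pred_prior.float() - target_prior.float()) ** 2
).mean()
print(prior_loss.shape)  # scalar loss, no shape mismatch
```

If this matches what the script does at line 2262, a workaround until the example is patched would be dropping the duplicate `--train_batch_size` flag or disabling prior preservation; the underlying fix belongs in the script's loss computation.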