
support hf_quantizer in cache warmup. #12043


Merged

merged 9 commits into main from respect-hf-quantizer-cache-warmup on Aug 14, 2025

Conversation

sayakpaul
Member

What does this PR do?

Brings the warmup function closer to https://github.com/huggingface/transformers/blob/d3b8627b56caa7ca8fac113c9f28d0256db0194d/src/transformers/modeling_utils.py#L5969

I have also run a snippet from #11904 (comment) and noticed similar timings. So, running the snippet on main and on this PR branch should yield similar results (not identical, since timings cannot be fully controlled, but the difference should be negligible).
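For readers unfamiliar with the warmup, the rough idea is to make one large dummy allocation sized from the checkpoint so the CUDA caching allocator reserves its memory up front; with a quantizer, that size should be divided by a per-dtype factor. Below is a minimal, hypothetical sketch of that idea (warmup_caching_allocator, division_factor, and bytes_per_param are illustrative names, not diffusers' actual API):

import torch

# Hypothetical sketch of a quantizer-aware caching-allocator warmup (not diffusers' code).
def warmup_caching_allocator(model, device, division_factor=1, bytes_per_param=2):
    # Rough size the weights would occupy at the unquantized dtype (e.g. 2 bytes for bf16).
    total_bytes = sum(p.numel() for p in model.parameters()) * bytes_per_param
    # Quantized weights occupy roughly 1/division_factor of that, so reserve less.
    warmup_bytes = total_bytes // division_factor
    # One large byte tensor makes the allocator reserve a contiguous block up front
    # instead of growing it through many small allocations while loading weights.
    _ = torch.empty(warmup_bytes, dtype=torch.uint8, device=device)
    # The tensor is dropped immediately; the memory stays reserved, not allocated.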

@sayakpaul sayakpaul requested a review from a-r-r-o-w August 1, 2025 10:01
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@a-r-r-o-w a-r-r-o-w left a comment

Thanks, changes look good!

- Use a division factor of 4 for int8 weights
"""
# Original mapping for non-AOBaseConfig types
map_to_target_dtype = {"int4_*": 8, "int8_*": 4, "float8*": 4}
Member Author

@sayakpaul sayakpaul Aug 6, 2025

Took a best guess of "8" for the unsigned int types. I think we can tackle more of these nuanced / lesser-used types as they become a bit more used. I think the int8 and fp8 types are far more common for now 👀

So, I have added a comment as well.
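For illustration, a glob-style map like the one above could be resolved along these lines. This is only a sketch: the "uint*" entry reflects the "best guess of 8" mentioned above, and get_division_factor plus the fallback of 1 are hypothetical, not diffusers' actual behaviour.

from fnmatch import fnmatch

# Sketch only: resolve a division factor from glob-style quant-type patterns.
map_to_target_dtype = {"int4_*": 8, "uint*": 8, "int8_*": 4, "float8*": 4}

def get_division_factor(quant_type: str, default: int = 1) -> int:
    for pattern, factor in map_to_target_dtype.items():
        if fnmatch(quant_type, pattern):
            return factor
    # Unknown types fall back to no reduction (illustrative default only).
    return default

# e.g. get_division_factor("int8_weight_only") -> 4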

@sayakpaul sayakpaul requested a review from a-r-r-o-w August 6, 2025 14:00
@sayakpaul sayakpaul merged commit 58bf268 into main Aug 14, 2025
34 of 35 checks passed
@sayakpaul sayakpaul deleted the respect-hf-quantizer-cache-warmup branch August 14, 2025 13:27
@JoeGaffney

Hey, I'm guessing this is a good optimization, but it probably needs documenting for the main release.

I suddenly ran into my memory maxing out when using 4-bit BnB.

2025-08-17 13:34:00,571 - INFO - GPU Memory Usage: 23.63GB / 25.77GB,  Reserved: 24.01GB, Allocated: 12.62GB, Usage: 91.72%

This confused me for a bit, and it took a while to track down whether something had changed in bitsandbytes or elsewhere. I was using total GPU usage as a metric, and others may use it as a saturation guide too, since it's what's reported at the OS and container level.

Cheers,
Joe

@sayakpaul
Member Author

This PR didn't really add any logging for what you're commenting about.

@JoeGaffney

Thinking about this a bit more, it doesn't really make sense to fully saturate the GPU; the reserved buffer should be something like 1.05-1.1x of what is needed. Usually the total allocated would sit pretty close to the model size.

For example, here the actual usage was 12GB, nowhere near the reserved amount.

This is sort of the point of using quantization: people don't want to fully fill their memory. I know allocated and reserved are different, but could we run into problems if other processes on the machine need some GPU memory?
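For anyone debugging this, a quick way to see the allocated/reserved split and to hand cached-but-unused blocks back to the driver is plain PyTorch (a generic sketch, nothing specific to this PR):

import torch

def report_cuda_memory(tag=""):
    # memory_allocated: bytes held by live tensors in this process.
    alloc = torch.cuda.memory_allocated() / 1e9
    # memory_reserved: bytes the caching allocator has claimed from the driver,
    # including cached blocks that are free for reuse but invisible to other processes.
    reserved = torch.cuda.memory_reserved() / 1e9
    print(f"{tag} allocated={alloc:.2f} GB, reserved={reserved:.2f} GB")

report_cuda_memory("before release")
# Returns cached (currently unused) blocks to the driver so other processes can use them.
torch.cuda.empty_cache()
report_cuda_memory("after release")

Note that torch.cuda.empty_cache() only releases blocks that are not backing live tensors, so it would not help if the memory is actually allocated.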

@JoeGaffney

JoeGaffney commented Aug 17, 2025

This PR didn't really add any logging for what you're commenting about.

Hey, this was from my own code using the latest main, which this is now merged into.

Basically, what I'm seeing is that as soon as I load a quantized model with BnB 4-bit, my GPU memory gets fully reserved.

@sayakpaul
Member Author

I don't think reserved will cause any problem TBH. Can you check with a commit earlier than this PR and report the reserved memory?

@JoeGaffney

I think it may also be spiking with Resizable BAR; I am seeing really large memory usage.

[screenshot: OS-level GPU memory usage]

This is at the OS level; I am just running one Flux test loading a quantized transformer and text encoder, inside a Docker container.

Sure, I can run from an earlier commit; it will be interesting to see.

@sayakpaul
Member Author

And let's please try keeping things as minimal as possible so that we, maintainers, can work with a minimal snippet to reproduce the potential bug.

@JoeGaffney

JoeGaffney commented Aug 17, 2025

Sure, as users we can provide examples, but this can take some time, as people's code is often split into more modular components and in some cases can't be publicly shared as is.

Will aim to get you something as minimal as possible.

FYI, something like this that changes memory allocation is a prime candidate for a unit test.

I am not seeing this behaviour one commit up from this one:
git+https://github.com/huggingface/diffusers.git@1b48db4c8fe76ffffa7382fd74d9f04d54aa5a16

4:35:02,059 - INFO - GPU Memory Usage: 16.97GB / 25.77GB,  Reserved: 15.99GB, Allocated: 12.70GB, Usage: 65.84%

And no funkiness at the OS level after the test run.
[screenshot: OS-level GPU memory usage after the test run]

@sayakpaul
Member Author

Oh indeed, thanks for confirming. Please provide a snippet when you can so that we can reproduce it minimally.

@sayakpaul
Member Author

Cc: @asomoza as well. Could you check if you see a similar behaviour?

@JoeGaffney

JoeGaffney commented Aug 17, 2025

Minimal example, just the loading:

import pytest
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel


@pytest.mark.skipif(not torch.cuda.is_available(), reason="CUDA required")
def test_bnb_quantized_model_warmup():
    model_id = "black-forest-labs/FLUX.1-dev"
    torch_dtype = torch.bfloat16

    # Quantization config for 4-bit BNB
    quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch_dtype)

    # Load model (actual warmup path triggered internally)
    model = FluxTransformer2DModel.from_pretrained(
        model_id, subfolder="transformer", quantization_config=quant_config, torch_dtype=torch_dtype
    )

    # Check memory stats
    torch.cuda.reset_peak_memory_stats()
    mem_alloc = torch.cuda.memory_allocated()
    mem_reserved = torch.cuda.memory_reserved()
    print(f"Allocated: {mem_alloc/1e6:.1f} MB, Reserved: {mem_reserved/1e6:.1f} MB")

    # Assert some reasonable range 
    assert mem_alloc > 0, "Model should allocate some GPU memory"
    assert mem_reserved > 0, "Warmup should reserve some GPU memory"

git+https://github.com/huggingface/diffusers.git@1b48db4c8fe76ffffa7382fd74d9f04d54aa5a16

docker-compose exec gpu-workers pytest tests/general/test_memory_warmup.py -vs
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.11.12, pytest-8.4.1, pluggy-1.5.0 -- /opt/conda/bin/python3.11
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/app/workers/.hypothesis/examples'))
rootdir: /app/workers
configfile: pytest.ini
plugins: hypothesis-6.131.7, hydra-core-1.3.2, anyio-4.10.0
collected 1 item

Fetching 3 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 44462.59it/s]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:07<00:00,  2.39s/it]

Allocated: 6702.5 MB, Reserved: 6790.6 MB
PASSED

main

docker-compose exec gpu-workers pytest tests/general/test_memory_warmup.py -vs
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.11.12, pytest-8.4.1, pluggy-1.5.0 -- /opt/conda/bin/python3.11
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase(PosixPath('/app/workers/.hypothesis/examples'))
rootdir: /app/workers
configfile: pytest.ini
plugins: hypothesis-6.131.7, hydra-core-1.3.2, anyio-4.10.0
collected 1 item

Fetching 3 files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 36472.21it/s]
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:07<00:00,  2.36s/it]

Allocated: 6704.4 MB, Reserved: 23995.6 MB
PASSED

@JoeGaffney

Testing a bit more with TorchAO as well.

BitsAndBytesConfig {
  "_load_in_4bit": true,
  "_load_in_8bit": false,
  "bnb_4bit_compute_dtype": "bfloat16",
  "bnb_4bit_quant_storage": "uint8",
  "bnb_4bit_quant_type": "nf4",
  "bnb_4bit_use_double_quant": false,
  "llm_int8_enable_fp32_cpu_offload": false,
  "llm_int8_has_fp16_weight": false,
  "llm_int8_skip_modules": null,
  "llm_int8_threshold": 6.0,
  "load_in_4bit": true,
  "load_in_8bit": false,
  "quant_method": "bitsandbytes"
}

Before moving to GPU Allocated: 6704.4 MB, Reserved: 23995.6 MB
After moving to GPU Allocated: 6704.4 MB, Reserved: 23995.6 MB
After moving to CPU Allocated: 0.0 MB, Reserved: 0.0 MB
PASSED

TorchAoConfig {
  "modules_to_not_convert": null,
  "quant_method": "torchao",
  "quant_type": "int8_weight_only",
  "quant_type_kwargs": {}
}

Before moving to GPU Allocated: 0.0 MB, Reserved: 0.0 MB
After moving to GPU Allocated: 12014.9 MB, Reserved: 12306.1 MB
After moving to CPU Allocated: 0.0 MB, Reserved: 0.0 MB
PASSED

import gc

import pytest
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel, TorchAoConfig


@pytest.fixture(
    params=[
        BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_compute_dtype=torch.bfloat16),
        TorchAoConfig("int8_weight_only"),
    ]
)
def quant_config(request):
    return request.param


def print_gpu_memory_usage(prefix=""):
    torch.cuda.reset_peak_memory_stats()
    mem_alloc = torch.cuda.memory_allocated()
    mem_reserved = torch.cuda.memory_reserved()
    print(f"{prefix} Allocated: {mem_alloc / 1e6:.1f} MB, Reserved: {mem_reserved / 1e6:.1f} MB")


@pytest.mark.skipif(not torch.cuda.is_available(), reason="CUDA required")
def test_quantized_model_warmup(quant_config):
    model_id = "black-forest-labs/FLUX.1-dev"
    torch_dtype = torch.bfloat16

    model = FluxTransformer2DModel.from_pretrained(
        model_id, subfolder="transformer", quantization_config=quant_config, torch_dtype=torch_dtype
    )
    print(str(quant_config))
    print_gpu_memory_usage("Before moving to GPU")

    model.to("cuda")
    print_gpu_memory_usage("After moving to GPU")

    mem_alloc = torch.cuda.memory_allocated()
    mem_reserved = torch.cuda.memory_reserved()
    assert mem_alloc > 0
    assert mem_reserved > 0

    model.to("cpu")
    gc.collect()
    torch.cuda.empty_cache()
    torch.cuda.ipc_collect()

    print_gpu_memory_usage("After moving to CPU")
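As a follow-up to the test above, here is a hedged sketch of the kind of regression assertion that could catch a reserved-memory blow-up; the 2x ratio is an arbitrary threshold chosen for illustration, not a value from diffusers:

import torch

def assert_reserved_within_bound(max_ratio: float = 2.0):
    # Sketch: after loading/moving a quantized model to CUDA, reserved memory
    # should stay within a small multiple of allocated memory.
    alloc = torch.cuda.memory_allocated()
    reserved = torch.cuda.memory_reserved()
    assert alloc > 0, "model should allocate some GPU memory"
    assert reserved <= max_ratio * alloc, (
        f"reserved {reserved / 1e6:.1f} MB exceeds {max_ratio}x allocated {alloc / 1e6:.1f} MB"
    )

# e.g. call assert_reserved_within_bound() right after model.to("cuda") in the test above.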
