Fix TorchAOConfig skip layers #19265
Conversation
Hello @mobicham, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team,
Gemini here with a summary of this pull request. This PR, titled "Fix TorchAOConfig skip layers", aims to address issues encountered when loading models quantized with TorchAO where not all layers have been converted. Specifically, it fixes crashes that occur when loading models that define certain modules to be skipped from quantization, either via the `modules_to_not_convert` list or by setting their configuration to `None` in `module_fqn_to_config`. The PR ensures that vLLM correctly identifies and handles these unquantized layers during loading. Additionally, a minor optimization is included to improve the loading speed for TorchAO models by avoiding the creation of large dummy `nn.Linear` modules during parameter quantization.
Highlights
- TorchAO Quantization Fix: Resolves issues loading TorchAO quantized models that contain layers explicitly marked to be skipped from quantization.
- Support for Skipped Modules: Adds logic to correctly handle modules listed in
modules_to_not_convert
and modules with aNone
configuration inmodule_fqn_to_config
, ensuring they are treated as unquantized. - Loading Speed Optimization: Improves the speed of loading TorchAO models by optimizing the creation of dummy
nn.Linear
modules used during parameter quantization. - New Test Case: Adds a test case specifically for loading a Qwen-VL model quantized with TorchAO, which helps validate the fix for models with unquantized components like vision heads.
Changelog
- tests/quantization/test_torchao.py
- Added a new test function
test_qwenvl_int8wo_model_loading_with_params
(lines 62-74) to test loading a Qwen-VL model quantized with TorchAO, targeting the scenario with potentially skipped layers.
- Added a new test function
- vllm/model_executor/layers/quantization/torchao.py
- Modified the
TorchAOConfig
constructor to accept an optionalskip_modules
list (lines 23-25, 41). - Updated the
from_config
class method to parsemodules_to_not_convert
and identify modules set toNone
inmodule_fqn_to_config
, adding them to theskip_modules
list (lines 78-85). - In
get_quant_method
, added a check to returnUnquantizedLinearMethod()
if the current module's prefix is in theskip_modules
list (lines 96-97). - Ensured the
skip_modules
list is passed down when creating nestedTorchAOConfig
instances withinget_quant_method
(line 105). - Optimized
torchao_quantize_param_data
by creating a smallnn.Linear(1, 1)
and manually settingin_features
andout_features
instead of using the full parameter shape directly (lines 129-131).
- Modified the
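For orientation, here is a minimal self-contained sketch of the skip check described in the changelog; the classes are stand-ins, and the real signatures in vllm/model_executor/layers/quantization/torchao.py differ.

```python
from typing import Any, Optional


class UnquantizedLinearMethod:
    """Stand-in for vLLM's unquantized linear fallback."""


class TorchAOLinearMethod:
    """Stand-in for the quantized linear path."""

    def __init__(self, config: "TorchAOConfigSketch"):
        self.config = config


class TorchAOConfigSketch:
    def __init__(self, torchao_config: Any,
                 skip_modules: Optional[list] = None):
        self.torchao_config = torchao_config
        # FQNs (or fragments) of modules left unquantized at export time.
        self.skip_modules = skip_modules or []

    def get_quant_method(self, prefix: str):
        # Modules whose prefix matches a skip entry load as plain linears.
        if any(s in prefix for s in self.skip_modules):
            return UnquantizedLinearMethod()
        return TorchAOLinearMethod(self)
```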
Code Review
This pull request effectively addresses the issue of loading TorchAO models with partially quantized layers, particularly for Vision-Language models. The changes to handle `modules_to_not_convert` and `module_fqn_to_config` for skipping layers are well-implemented. The added test case for a Qwen-VL model is a good addition, and the optimization in `torchao_quantize_param_data` to reduce memory allocation during dummy linear layer creation is a nice improvement.
I have one suggestion regarding the string matching logic for skipping modules, which could be made more robust to prevent potential over-matching. Overall, this is a valuable fix.
Summary of Findings
- Module Skipping Logic Robustness: The logic for determining whether to skip a module (`any(s in prefix for s in self.skip_modules)`) uses a general substring check. This could lead to over-matching if a skip pattern is a substring of an unrelated module's FQN (e.g., skipping `"layer.1"` might unintentionally affect `"layer.10"`). A more precise FQN-aware prefix match or exact match would be more robust; see the sketch after this list.
- Test Coverage: A new test case (`test_qwenvl_int8wo_model_loading_with_params`) was added, which is good for verifying the fix for VL models with unquantized vision modules.
- Performance Improvement: The change in `torchao_quantize_param_data` to initialize `nn.Linear` with minimal dimensions (1, 1) and then update `in_features` and `out_features` is a good optimization to reduce temporary memory allocation.
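For reference, a component-aware match along the lines suggested above could look like this (hypothetical helper, not code from the PR):

```python
def matches_skip(prefix: str, skip_modules: list) -> bool:
    # Match only whole dot-separated FQN components, so a skip entry
    # "layer.1" does not accidentally match "layer.10".
    for s in skip_modules:
        if (prefix == s or prefix.startswith(s + ".")
                or prefix.endswith("." + s) or ("." + s + ".") in prefix):
            return True
    return False
```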
Merge Readiness
The pull request is well-structured and addresses the core issues effectively. However, there's one medium-severity concern regarding the robustness of the module skipping logic that should be discussed and potentially addressed. Once that point is clarified or resolved, the PR should be in good shape for merging. As an AI, I am not authorized to approve pull requests; please ensure further review and approval from the maintainers.
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a small, essential subset of CI tests runs automatically to quickly catch errors. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. 🚀
cc: @jerryzh168
@mobicham thanks for the fix! Can you talk a bit more about the qkv fusion you mentioned before? I still didn't quite get it.
  assert isinstance(torchao_config, AOBaseConfig), f"{torchao_config}"
- dummy_linear = torch.nn.Linear(param.shape[1], param.shape[0], bias=False)
+ dummy_linear = torch.nn.Linear(1, 1, bias=False)
I am a little surprised the returned subclass actually has the right metadata for copy_-ing the state dict.
Yes, as long as you set the right `in_features` / `out_features`. I have been using this trick for a long time and it saves a lot of time, especially with large layers: it can make loading about 5-10x faster. I also use it in my vLLM loading logic. A minimal sketch of the trick follows.
The q, k, v fusion issue you mentioned makes sense; does this PR fix that?
@drisspg only if:
Otherwise, there's no clean way to merge qkv if the projections don't share the same quant settings. Moreover, the merging is not happening in TorchAOConfig; it happens in the QKV linear modules. The main focus of this PR is to handle layer skipping for layers that were not quantized, though, so it simply checks in the config whether the prefix matches the skipped layers defined there.
[Follow-up to #19147 due to DCO rebasing issues]
Purpose
The goal of this small PR is to fix loading torchao models where not all the layers have been quantized.
The current implementation doesn't keep track of the skipped layers defined in `config["modules_to_not_convert"]`. As a result, loading quantized VL models whose vision head is not quantized crashes. The PR also adds logic to skip layers defined in `module_fqn_to_config`: currently, if a module is skipped in `module_fqn_to_config`, loading the model in vLLM crashes. It also includes a quick fix that improves loading speed by avoiding the creation of an `nn.Linear` with the full tensor shape. A short sketch of the skip-module collection is shown below.
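A simplified sketch of that collection logic (not the exact PR code):

```python
from typing import Optional


def parse_skip_modules(
    modules_to_not_convert: Optional[list],
    module_fqn_to_config: Optional[dict],
) -> list:
    # Explicit skip list from the checkpoint's quantization config.
    skip = list(modules_to_not_convert or [])
    # Modules mapped to None in module_fqn_to_config are also unquantized.
    skip += [fqn for fqn, cfg in (module_fqn_to_config or {}).items()
             if cfg is None]
    return skip
```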
Test Plan
Dependencies
Code
Loading a VL model with unquantized vision modules
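A hypothetical loading snippet stands in for the original example here; the checkpoint name is a placeholder for a TorchAO int8 weight-only Qwen-VL export whose vision modules were skipped:

```python
from vllm import LLM

# Placeholder checkpoint: a TorchAO-quantized VL model whose vision tower
# is listed in "modules_to_not_convert" and therefore kept in high precision.
llm = LLM(model="some-org/Qwen2.5-VL-3B-int8wo")
out = llm.generate("Describe the picture in one sentence.")
print(out[0].outputs[0].text)
```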
Skip module example
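And a hypothetical sketch of producing a checkpoint with a skipped module on the torchao side, assuming torchao's ModuleFqnToConfig (API names may differ across torchao versions); a None entry marks the module as skipped:

```python
import torch
from torchao.quantization import (Int8WeightOnlyConfig, ModuleFqnToConfig,
                                  quantize_)

model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.Linear(64, 64),
)
config = ModuleFqnToConfig({
    "_default": Int8WeightOnlyConfig(),  # quantize every linear by default
    "1": None,                           # skip the second linear (FQN "1")
})
quantize_(model, config)
```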
Test Result
The model should load successfully.
@jerryzh168