[quant] deepgemm-fp8w8a8-b128 quantize #952
Conversation
Summary of Changes
Hello @sufubao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces comprehensive support for FP8 quantization within the DeepGEMM framework, specifically targeting Mixture of Experts (MoE) models. It integrates a new Triton-based kernel for efficient block-wise FP8 weight quantization, refines MoE configuration handling, and sets up distributed groups for Deep Expert Parallelism, enhancing both performance and flexibility.
Highlights
- FP8 Quantization Implementation: Implemented block-wise FP8 (e4m3fn) weight quantization using a new Triton kernel (fp8w8a8_block_quant_kernel.py). The kernel calculates per-block scales and quantizes weights, supporting both 2D and 3D tensors. (A reference sketch of the block-wise scheme follows this list.)
- DeepGEMM Integration: Enabled the deepgemm_quant.py module to utilize the new FP8 quantization kernel, moving beyond a "Not implemented" state. The apply method was also made more flexible to handle optional input scales.
- MoE Configuration & Inference Adjustments: Updated MoE layer weight initialization to provide default values for scoring_func, n_group, topk_group, and routed_scaling_factor. The MoE inference path in qwen3_moe/layer_infer/transformer_layer_infer.py was simplified by removing shared expert handling and explicitly disabling grouped top-k routing.
- Distributed Environment Setup: Integrated Deep Expert Parallelism (DeepEP) group initialization into the Qwen3 MoE model's custom initialization. A new run_once decorator was added and applied to init_distributed_env to ensure the distributed environment setup occurs only once.
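For readers unfamiliar with the scheme, here is a minimal pure-PyTorch sketch of block-wise FP8 (e4m3fn) weight quantization as the first highlight describes it. This is not the PR's Triton kernel: the 128x128 block size is inferred from the "b128" naming, and the padding and clamping details are assumptions.

```python
import torch


def block_quant_ref(w: torch.Tensor, block: int = 128):
    """Reference (non-Triton) block-wise FP8 e4m3fn quantization of a 2D weight.

    Pads the weight up to a multiple of `block`, computes one scale per
    block x block tile from the tile's absolute maximum, and quantizes.
    """
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn
    m, n = w.shape
    pm, pn = -(-m // block) * block, -(-n // block) * block
    wp = torch.zeros(pm, pn, dtype=torch.float32, device=w.device)
    wp[:m, :n] = w.float()

    tiles = wp.view(pm // block, block, pn // block, block)
    amax = tiles.abs().amax(dim=(1, 3)).clamp(min=1e-12)  # one max per tile
    scales = amax / fp8_max                               # shape (pm//block, pn//block)
    q = (tiles / scales[:, None, :, None]).clamp(-fp8_max, fp8_max)
    q = q.view(pm, pn)[:m, :n].to(torch.float8_e4m3fn)
    return q, scales
```

The PR's Triton kernel additionally handles 3D (per-expert) tensors; this sketch covers only the 2D case.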
Code Review
This pull request introduces support for deepgemm-fp8w8a8-b128 quantization. A critical bug in the quantization kernel that could lead to NaN values was identified and a fix was provided. Several medium-severity suggestions were also included to improve maintainability and robustness.
lightllm/common/quantization/triton_quant/fp8/fp8w8a8_block_quant_kernel.py (outdated; review comment resolved)
network_config["n_group"] = network_config.get("n_group", 0) | ||
self.num_experts_per_tok = network_config["num_experts_per_tok"] | ||
self.use_grouped_topk = network_config["n_group"] > 0 | ||
self.norm_topk_prob = network_config["norm_topk_prob"] | ||
self.n_group = network_config["n_group"] | ||
network_config["topk_group"] = network_config.get("topk_group", 0) | ||
self.topk_group = network_config["topk_group"] | ||
network_config["routed_scaling_factor"] = network_config.get("routed_scaling_factor", 0) | ||
self.routed_scaling_factor = network_config["routed_scaling_factor"] |
Modifying network_config directly can cause unexpected side effects. Instead, retrieve the values and assign them to instance attributes. This prevents unintended modifications to the original dictionary.
network_config["n_group"] = network_config.get("n_group", 0) | |
self.num_experts_per_tok = network_config["num_experts_per_tok"] | |
self.use_grouped_topk = network_config["n_group"] > 0 | |
self.norm_topk_prob = network_config["norm_topk_prob"] | |
self.n_group = network_config["n_group"] | |
network_config["topk_group"] = network_config.get("topk_group", 0) | |
self.topk_group = network_config["topk_group"] | |
network_config["routed_scaling_factor"] = network_config.get("routed_scaling_factor", 0) | |
self.routed_scaling_factor = network_config["routed_scaling_factor"] | |
self.n_group = network_config.get("n_group", 0) | |
self.num_experts_per_tok = network_config["num_experts_per_tok"] | |
self.use_grouped_topk = self.n_group > 0 | |
self.norm_topk_prob = network_config["norm_topk_prob"] | |
self.topk_group = network_config.get("topk_group", 0) | |
self.routed_scaling_factor = network_config.get("routed_scaling_factor", 0) |
```diff
@@ -41,11 +41,15 @@ def __init__(self):
         self.act_scale_suffix = None  # no support for static input tensor scale for ds model.

     def quantize(self, weight: torch.Tensor):
-        raise Exception("Not implemented")
+        from lightllm.common.quantization.triton_quant.fp8.fp8w8a8_block_quant_kernel import weight_quant
```
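For context, here is a hypothetical sketch of how quantize might wire the new kernel in. The import path is taken from the diff above, but the call signature of weight_quant and the fixed block size of 128 are assumptions rather than the PR's verified code.

```python
def quantize(self, weight: torch.Tensor):
    # Import path as shown in the diff above; the call below is an assumed usage,
    # producing FP8 (e4m3fn) weights plus per-block scales.
    from lightllm.common.quantization.triton_quant.fp8.fp8w8a8_block_quant_kernel import weight_quant

    qweight, weight_scale = weight_quant(weight, 128)  # block size assumed from the b128 name
    return qweight, weight_scale
```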
```python
        return y_quant, s_scales
    else:
        y_quant, s_scales = mm_weight_quant(x, block_size)
        return y_quant.t(), s_scales.t()
```
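The excerpt above transposes both outputs on the 2D path, presumably to match the layout DeepGEMM expects, while the other branch returns them as-is. A hedged usage sketch follows, assuming the weight_quant(x, block_size) signature implied by the mm_weight_quant call; shapes and dtypes here are illustrative, not verified against the PR.

```python
import torch
from lightllm.common.quantization.triton_quant.fp8.fp8w8a8_block_quant_kernel import weight_quant

# Illustrative only: assumed shapes, dtypes, and calling convention.
w2d = torch.randn(512, 1024, device="cuda", dtype=torch.bfloat16)
q2d, s2d = weight_quant(w2d, 128)   # 2D path: outputs come back transposed

w3d = torch.randn(8, 512, 1024, device="cuda", dtype=torch.bfloat16)  # e.g. stacked expert weights
q3d, s3d = weight_quant(w3d, 128)   # 3D path: outputs keep their layout
```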
lightllm/models/qwen3_moe/model.py (outdated)

```python
# self.config["num_hidden_layers"] = 2
# self.config["n_layer"] = 2
```
lightllm/utils/dist_utils.py (outdated)

```python
def run_once(func):
    has_run = False

    def wrapper(*args, **kwargs):
        nonlocal has_run
        if not has_run:
            has_run = True
            return func(*args, **kwargs)
        else:
            return None

    return wrapper
```
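A quick usage sketch of the decorator above; the real init_distributed_env signature is not shown in this diff, so the argument here is illustrative. Note that without functools.wraps the wrapped function's name and docstring are lost, which may be worth addressing.

```python
@run_once
def init_distributed_env(world_size: int):
    # Illustrative body: the real function sets up process groups / DeepEP groups.
    print(f"initializing distributed env for world_size={world_size}")
    return world_size

init_distributed_env(8)   # first call runs the body and returns 8
init_distributed_env(8)   # every later call is skipped and returns None
```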