[quant] deepgemm-fp8w8a8-b128 quantize #952

Merged: 8 commits merged into main from qwen3_deepgemm_quant on Jul 2, 2025

Conversation

@sufubao (Collaborator) commented on Jul 2, 2025

No description provided.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @sufubao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive support for FP8 quantization within the DeepGEMM framework, specifically targeting Mixture of Experts (MoE) models. It integrates a new Triton-based kernel for efficient block-wise FP8 weight quantization, refines MoE configuration handling, and sets up distributed groups for Deep Expert Parallelism, enhancing both performance and flexibility.

Highlights

  • FP8 Quantization Implementation: Implemented block-wise FP8 (e4m3fn) weight quantization using a new Triton kernel (fp8w8a8_block_quant_kernel.py). The kernel computes per-block scales and quantizes weights, supporting both 2D and 3D tensors (see the reference sketch after this list).
  • DeepGEMM Integration: Enabled the deepgemm_quant.py module to utilize the new FP8 quantization kernel, moving beyond a "Not implemented" state. The apply method was also made more flexible to handle optional input scales.
  • MoE Configuration & Inference Adjustments: Updated MoE layer weight initialization to provide default values for scoring_func, n_group, topk_group, and routed_scaling_factor. The MoE inference path in qwen3_moe/layer_infer/transformer_layer_infer.py was simplified by removing shared expert handling and explicitly disabling grouped top-k routing.
  • Distributed Environment Setup: Integrated Deep Expert Parallelism (DeepEP) group initialization into the Qwen3 MoE model's custom initialization. A new run_once decorator was added and applied to init_distributed_env to ensure distributed environment setup occurs only once.
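
For context on the first highlight, here is a minimal PyTorch reference sketch of what 128x128 block-wise FP8 (e4m3fn) weight quantization computes for a 2D weight. This is not the Triton kernel added in this PR; the function name, padding behavior, and scale layout are illustrative assumptions only.

```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn

def block_quant_ref(w: torch.Tensor, block_size: int = 128):
    """Quantize a 2D weight to float8_e4m3fn with one scale per
    (block_size x block_size) block. Returns (w_fp8, scales)."""
    assert w.dim() == 2, "reference sketch covers the 2D case only"
    n, k = w.shape
    # Pad so both dimensions are multiples of the block size.
    pad_n = (block_size - n % block_size) % block_size
    pad_k = (block_size - k % block_size) % block_size
    w_pad = torch.nn.functional.pad(w.float(), (0, pad_k, 0, pad_n))
    nb, kb = w_pad.shape[0] // block_size, w_pad.shape[1] // block_size
    # View as (nb, block, kb, block) and take the per-block absolute maximum.
    blocks = w_pad.view(nb, block_size, kb, block_size)
    amax = blocks.abs().amax(dim=(1, 3)).clamp(min=1e-12)  # avoid divide-by-zero
    scales = amax / FP8_MAX                                # shape (nb, kb)
    # Scale each block into the representable e4m3fn range and cast.
    w_fp8 = (blocks / scales[:, None, :, None]).view_as(w_pad).to(torch.float8_e4m3fn)
    return w_fp8[:n, :k], scales
```

Dequantization multiplies each block back by its scale, which is effectively what a block-scaled GEMM consumes alongside the FP8 weights.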

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces support for deepgemm-fp8w8a8-b128 quantization. A critical bug in the quantization kernel that could lead to NaN values was identified and a fix was provided. Several medium-severity suggestions were also included to improve maintainability and robustness.

Comment on lines 91 to 99
network_config["n_group"] = network_config.get("n_group", 0)
self.num_experts_per_tok = network_config["num_experts_per_tok"]
self.use_grouped_topk = network_config["n_group"] > 0
self.norm_topk_prob = network_config["norm_topk_prob"]
self.n_group = network_config["n_group"]
network_config["topk_group"] = network_config.get("topk_group", 0)
self.topk_group = network_config["topk_group"]
network_config["routed_scaling_factor"] = network_config.get("routed_scaling_factor", 0)
self.routed_scaling_factor = network_config["routed_scaling_factor"]

Severity: medium

Modifying network_config directly can cause unexpected side effects. Instead, retrieve the values and assign them to instance attributes. This prevents unintended modifications to the original dictionary.

Suggested change
- network_config["n_group"] = network_config.get("n_group", 0)
- self.num_experts_per_tok = network_config["num_experts_per_tok"]
- self.use_grouped_topk = network_config["n_group"] > 0
- self.norm_topk_prob = network_config["norm_topk_prob"]
- self.n_group = network_config["n_group"]
- network_config["topk_group"] = network_config.get("topk_group", 0)
- self.topk_group = network_config["topk_group"]
- network_config["routed_scaling_factor"] = network_config.get("routed_scaling_factor", 0)
- self.routed_scaling_factor = network_config["routed_scaling_factor"]
+ self.n_group = network_config.get("n_group", 0)
+ self.num_experts_per_tok = network_config["num_experts_per_tok"]
+ self.use_grouped_topk = self.n_group > 0
+ self.norm_topk_prob = network_config["norm_topk_prob"]
+ self.topk_group = network_config.get("topk_group", 0)
+ self.routed_scaling_factor = network_config.get("routed_scaling_factor", 0)

@@ -41,11 +41,15 @@ def __init__(self):
         self.act_scale_suffix = None  # no support for static input tensor scale for ds model.
 
     def quantize(self, weight: torch.Tensor):
-        raise Exception("Not implemented")
+        from lightllm.common.quantization.triton_quant.fp8.fp8w8a8_block_quant_kernel import weight_quant

Severity: medium

Consider moving this import to the top of the file for better code organization and to avoid potential performance implications of local imports.
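
For reference, the suggested hoist would place the import at module scope (import path taken from the diff above), so it is resolved once at import time rather than on every quantize() call:

```python
# Module-level import instead of importing inside quantize().
from lightllm.common.quantization.triton_quant.fp8.fp8w8a8_block_quant_kernel import weight_quant
```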

        return y_quant, s_scales
    else:
        y_quant, s_scales = mm_weight_quant(x, block_size)
        return y_quant.t(), s_scales.t()

Severity: medium

The function returns transposed tensors for 2D inputs but not for 3D inputs. Add a comment explaining why the transpose is necessary for 2D tensors but not for 3D tensors.

Comment on lines 28 to 29
# self.config["num_hidden_layers"] = 2
# self.config["n_layer"] = 2

Severity: medium

Remove the commented-out code to keep the codebase clean.

Comment on lines 80 to 89
def run_once(func):
    has_run = False
    def wrapper(*args, **kwargs):
        nonlocal has_run
        if not has_run:
            has_run = True
            return func(*args, **kwargs)
        else:
            return None
    return wrapper

Severity: medium

The run_once decorator is not thread-safe. Use a threading.Lock to avoid race conditions.
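
A thread-safe variant of the run_once decorator, guarding the first-call check with threading.Lock, could look like the sketch below (illustrative only, not the merged implementation):

```python
import functools
import threading

def run_once(func):
    """Run func on the first call only; subsequent calls return None."""
    has_run = False
    lock = threading.Lock()

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal has_run
        with lock:  # serialize the check-and-set of the flag
            if has_run:
                return None
            has_run = True
        # The first caller runs func outside the lock, so a slow func does
        # not block threads that only need to discover it has already run.
        return func(*args, **kwargs)

    return wrapper
```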

sufubao and others added 5 commits July 2, 2025 11:24
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@hiworldwzj merged commit 81c5f61 into main on Jul 2, 2025. 1 check passed.
@hiworldwzj deleted the qwen3_deepgemm_quant branch on Jul 2, 2025 at 06:51.