
Conversation


@ch-wan ch-wan commented Oct 15, 2025

Motivation

Modifications

Accuracy Tests

Benchmarking and Profiling

Checklist

gemini-code-assist bot (Contributor) commented:

Summary of Changes

Hello @ch-wan, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a minor optimization within the distributed attention mechanism by removing a redundant fill_(0) operation in the dp_scatter function. This change enhances code efficiency and clarity without impacting the functional correctness or behavior of the system.

Highlights

  • Redundant Operation Removal: The local_tokens.fill_(0) call within the dp_scatter function in python/sglang/srt/layers/dp_attention.py has been removed as it was found to be redundant, streamlining the code.
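
For readers unfamiliar with the surrounding code, the following is a minimal sketch of the scatter pattern under discussion. It is a simplified, hypothetical illustration: the real dp_scatter in python/sglang/srt/layers/dp_attention.py uses get_dp_local_info and a Triton memcpy kernel rather than plain tensor indexing.

import torch

def dp_scatter_sketch(local_tokens, global_tokens, local_start_pos, local_num_tokens):
    # local_tokens is a preallocated output buffer; it may be padded for CUDA
    # graph capture, so local_tokens.shape[0] can exceed local_num_tokens.
    # The line removed by this PR zero-filled the whole buffer first:
    # local_tokens.fill_(0)
    # Only the first local_num_tokens rows are then overwritten with this
    # rank's slice of the global hidden states.
    local_tokens[:local_num_tokens].copy_(
        global_tokens[local_start_pos : local_start_pos + local_num_tokens]
    )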
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, and Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request removes a fill_(0) call in dp_scatter, with the goal of removing a redundant operation. However, this change could introduce risks. My review points out that removing this initialization may leave parts of an output tensor uninitialized, which could lead to non-deterministic behavior or incorrect calculations if that memory is accessed later. I have requested clarification on why this operation is considered safe to remove.

# since local_tokens may be padded for cuda graph
local_start_pos, local_num_tokens = get_dp_local_info(forward_batch)

local_tokens.fill_(0)


Severity: high

Removing local_tokens.fill_(0) could be risky. The comment on lines 491-492 indicates that local_tokens may be padded, meaning local_tokens.shape[0] can be larger than the actual number of tokens, local_num_tokens. The memcpy_triton call that follows only writes to the first local_num_tokens rows of the local_tokens tensor.

This leaves the padded portion of the tensor with uninitialized data from torch.empty. If any subsequent operation reads this padded area, it could lead to incorrect and non-deterministic behavior, which can be very difficult to debug.

For context, _dp_gather_via_all_reduce explicitly zero-fills global_tokens before a partial copy, which is necessary for the correctness of the subsequent all-reduce operation. While local_tokens is an output parameter here and not immediately reduced, leaving parts of it uninitialized is generally unsafe unless there's a strict contract with all callers that they will only ever use the [:local_num_tokens] slice.

Could you please clarify why this fill_(0) is considered redundant? If it's safe to remove, it would be beneficial to add a comment explaining the invariant that guarantees the padded area is never read.
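
To make the concern concrete, here is a small, self-contained illustration (not the actual sglang code) of why a partially written torch.empty buffer can be hazardous: the padded rows keep whatever stale values were in memory, so any consumer that reads past the first local_num_tokens rows, for example a reduction over the full buffer, sees non-deterministic data.

import torch

num_padded, num_real, hidden = 8, 5, 4           # padded buffer size vs. real token count
local_tokens = torch.empty(num_padded, hidden)   # uninitialized, as after removing fill_(0)
local_tokens[:num_real] = 1.0                    # partial write, analogous to memcpy_triton

safe = local_tokens[:num_real].sum()   # always 20.0: only the written slice is read
risky = local_tokens.sum()             # depends on stale memory in rows 5..7

If every caller honors the [:local_num_tokens] contract, the fill is indeed redundant; the sketch only shows what can go wrong when that contract is violated.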

@ch-wan changed the title from "Remove redundant fill_(0) in dp_scatter" to "wip: Remove redundant fill_(0) in dp_scatter" on Oct 15, 2025