wip: Remove redundant fill_(0) in dp_scatter #11683
Conversation
Summary of Changes

Hello @ch-wan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request focuses on a minor optimization within the distributed attention mechanism by removing a redundant `fill_(0)` call.
Code Review
This pull request removes a `fill_(0)` call in `dp_scatter` with the goal of eliminating a redundant operation. However, this change could introduce risks. My review points out that removing this initialization may leave parts of an output tensor uninitialized, which could lead to non-deterministic behavior or incorrect calculations if that memory is accessed later. I have requested clarification on why this operation is considered safe to remove.
# since local_tokens may be padded for cuda graph
local_start_pos, local_num_tokens = get_dp_local_info(forward_batch)

local_tokens.fill_(0)
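To make the surrounding code path concrete, here is a self-contained sketch of the scatter pattern under discussion. The helpers `get_dp_local_info` and `memcpy_triton` are replaced with simplified, hypothetical stand-ins; the real sglang signatures and kernel behavior differ.

```python
import torch

# Hypothetical stand-ins for the helpers quoted in the diff; simplified for illustration.
def get_dp_local_info(forward_batch):
    return forward_batch["local_start_pos"], forward_batch["local_num_tokens"]

def memcpy_triton(dst, src, start, num):
    # Partial copy: only the first `num` rows of dst are written.
    dst[:num].copy_(src[start:start + num])

def dp_scatter_sketch(local_tokens, global_tokens, forward_batch, zero_fill=True):
    # local_tokens may be padded for CUDA graphs, so shape[0] can exceed local_num_tokens.
    local_start_pos, local_num_tokens = get_dp_local_info(forward_batch)
    if zero_fill:
        local_tokens.fill_(0)  # the call this PR removes
    memcpy_triton(local_tokens, global_tokens, local_start_pos, local_num_tokens)
```

Whether dropping the zero-fill is safe depends entirely on whether any caller ever reads `local_tokens` beyond the first `local_num_tokens` rows.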
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Removing `local_tokens.fill_(0)` could be risky. The comment on lines 491-492 indicates that `local_tokens` may be padded, meaning `local_tokens.shape[0]` can be larger than the actual number of tokens, `local_num_tokens`. The `memcpy_triton` call that follows only writes to the first `local_num_tokens` rows of the `local_tokens` tensor.

This leaves the padded portion of the tensor with uninitialized data from `torch.empty`. If any subsequent operation reads this padded area, it could lead to incorrect and non-deterministic behavior, which can be very difficult to debug.
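A minimal, self-contained reproduction of this failure mode (plain PyTorch, not sglang code) might look like the following:

```python
import torch

padded_len, num_tokens, hidden = 8, 5, 4

local_tokens = torch.empty(padded_len, hidden)   # padded allocation, contents arbitrary
real_data = torch.ones(num_tokens, hidden)

local_tokens[:num_tokens].copy_(real_data)       # partial write, like memcpy_triton

print(local_tokens[num_tokens:])                 # rows 5..7 hold whatever was in memory
# With the removed local_tokens.fill_(0), these padded rows would instead be zeros.
```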
For context, `_dp_gather_via_all_reduce` explicitly zero-fills `global_tokens` before a partial copy, which is necessary for the correctness of the subsequent all-reduce operation. While `local_tokens` is an output parameter here and not immediately reduced, leaving parts of it uninitialized is generally unsafe unless there is a strict contract with all callers that they will only ever use the `[:local_num_tokens]` slice.
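As a rough, single-process illustration of why that zero-fill matters in the all-reduce path (simulating `all_reduce` as a plain sum over per-rank buffers; not the actual distributed code):

```python
import torch

world_size, tokens_per_rank, hidden = 2, 3, 4
global_len = world_size * tokens_per_rank

buffers = []
for rank in range(world_size):
    buf = torch.zeros(global_len, hidden)          # the zero-fill the gather path relies on
    start = rank * tokens_per_rank
    buf[start:start + tokens_per_rank] = rank + 1  # each rank fills only its own slice
    buffers.append(buf)

reduced = torch.stack(buffers).sum(dim=0)          # stands in for all_reduce(SUM)
# Each row of `reduced` now holds exactly one rank's contribution. If the buffers
# had been allocated with torch.empty instead, the untouched slices would add
# garbage into the sum, giving wrong and non-deterministic results.
```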
Could you please clarify why this `fill_(0)` is considered redundant? If it is safe to remove, it would be beneficial to add a comment explaining the invariant that guarantees the padded area is never read.
Motivation
Modifications
Accuracy Tests
Benchmarking and Profiling
Checklist