cuda graph pool with LRU #964

Open · wants to merge 2 commits into main

Conversation

STwangyingrui (Collaborator)

No description provided.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @STwangyingrui, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the management of CUDA graphs by introducing an LRU-based caching mechanism. The change allows the system to manage GPU memory by retaining recently used graphs and evicting the least recently used ones when a configurable pool size limit is reached. It also refines the graph capture process and adds runtime checks for improved stability.

Highlights

  • CUDA Graph Pool Management: Implemented a Least Recently Used (LRU) eviction policy for the CUDA graph pool. This ensures that when the pool reaches its maximum capacity, the least recently used graphs are automatically removed to free up memory, improving resource management.
  • Configurable Pool Size: Introduced a new configurable parameter, --max_graph_pool_size, allowing users to specify the maximum number of CUDA graphs to retain in the pool. The default value is set to 16 (see the sketch after this list).
  • Updated Graph Capture Logic: Modified the internal logic that determines when a new CUDA graph needs to be captured. Instead of a need_capture method, the system now directly queries the graph pool using get_graph, which also handles updating the recency of accessed graphs.
  • Runtime Assertions for Robustness: Added assert statements to ensure that a valid find_graph_batch_size is always determined before proceeding with CUDA graph operations, enhancing the robustness of the system.
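
To illustrate the new option, here is a hypothetical argparse registration consistent with the flag name and default value described above; the PR's actual argument wiring in lightllm may look different.

import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--max_graph_pool_size",
    type=int,
    default=16,
    help="Maximum number of captured CUDA graphs retained in the LRU pool.",
)
args = parser.parse_args()
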
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature                Command               Description
Code Review            /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary   /gemini summary       Provides a summary of the current pull request in its current state.
Comment                @gemini-code-assist   Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                   /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

@gemini-code-assist (bot) left a comment

Code Review

The code changes implement a CUDA graph pool with an LRU eviction strategy to manage CUDA graphs for different batch sizes. This involves adding a new max_graph_pool_size parameter, using an OrderedDict to track graph usage, and implementing get_graph and evict_oldest_graph methods for LRU management. The changes also include assertions and comments to improve code clarity and robustness.
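
To make the described mechanism concrete, below is a minimal, self-contained sketch of such an OrderedDict-based LRU pool, assembled from the snippets quoted in this review. The get_graph and evict_oldest_graph names, the pop/re-insert recency update, and popitem(last=False) come from the diff; the surrounding class wiring (including add_graph) is illustrative, not the PR's exact code.

from collections import OrderedDict

import torch


class CudaGraphLRUPool:
    """Illustrative LRU pool of captured CUDA graphs, keyed by batch size."""

    def __init__(self, max_graph_pool_size: int = 16):
        self.max_graph_pool_size = max_graph_pool_size
        self.graph = OrderedDict()  # batch_size -> captured graph object

    def get_graph(self, batch_size):
        # Return the cached graph and mark it most recently used, or None if
        # no graph has been captured for this batch size yet (capture needed).
        if batch_size in self.graph:
            find_graph = self.graph.pop(batch_size)
            self.graph[batch_size] = find_graph  # re-insert as most recently used
            return find_graph
        return None

    def evict_oldest_graph(self):
        # OrderedDict preserves insertion order, so with the recency update in
        # get_graph the first item is the least recently used entry.
        oldest_batch_size, oldest_graph = self.graph.popitem(last=False)
        del oldest_graph
        torch.cuda.empty_cache()  # return freed graph memory to the allocator

    def add_graph(self, batch_size, graph_obj):
        # Evict before inserting so the pool never exceeds its size limit.
        if len(self.graph) >= self.max_graph_pool_size:
            self.evict_oldest_graph()
        self.graph[batch_size] = graph_obj

Under this shape, the decode path captures only on a miss: if pool.get_graph(bs) is None, capture a new graph and call add_graph(bs, graph), which matches the get_graph(...) is None checks in the diff below.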

@@ -346,6 +346,8 @@ def _decode(
     ) -> ModelOutput:
         if self.graph is not None and self.graph.can_run(model_input.batch_size, model_input.max_len_in_batch):
             find_graph_batch_size = self.graph.find_closest_graph_batch_size(model_input.batch_size)
+            assert find_graph_batch_size is not None

Severity: medium

Adding an assertion here is good for debugging, but it's better to handle the case where find_graph_batch_size is None more gracefully. Consider raising a more informative exception or logging a warning and falling back to a non-graph execution path.

if find_graph_batch_size is None:
    logger.warning("No suitable graph batch size found, falling back to non-graph execution.")
    return self._token_forward(model_input.input_ids, self._create_inferstate(model_input))

@@ -356,7 +358,7 @@ def _decode(
             )
             infer_state.init_some_extra_state(self, padded_model_input.input_ids)

-            if self.graph.need_capture(find_graph_batch_size):
+            if self.graph.get_graph(find_graph_batch_size) is None:

Severity: medium

The condition self.graph.get_graph(find_graph_batch_size) is None is used to determine if a new graph needs to be captured. Consider adding a comment explaining why get_graph is used here instead of the previous need_capture function, and why checking for None is the appropriate way to determine if a new graph is needed.

# Check if a graph needs to be captured. get_graph returns None if a graph for the batch_size doesn't exist.
if self.graph.get_graph(find_graph_batch_size) is None:

@@ -497,6 +499,8 @@ def microbatch_overlap_decode(self, model_input0: ModelInput, model_input1: Mode

         if self.graph is not None and self.graph.can_run(origin_batch_size, max_len_in_batch):
             find_graph_batch_size = self.graph.find_closest_graph_batch_size(origin_batch_size)
+            assert find_graph_batch_size is not None

Severity: medium

Similar to the assertion in _decode, consider handling the case where find_graph_batch_size is None more gracefully in microbatch_overlap_decode. Provide a fallback mechanism or raise a more informative exception.

if find_graph_batch_size is None:
    logger.warning("No suitable graph batch size found, falling back to non-graph execution.")
    return self._token_forward(model_input0.input_ids, self._create_inferstate(model_input0, 0))

@@ -516,7 +520,7 @@ def microbatch_overlap_decode(self, model_input0: ModelInput, model_input1: Mode
             )
             infer_state1.init_some_extra_state(self, padded_model_input1.input_ids)

-            if self.graph.need_capture(find_graph_batch_size):
+            if self.graph.get_graph(find_graph_batch_size) is None:

Severity: medium

Similar to the _decode function, add a comment explaining why get_graph is used here instead of the previous need_capture function, and why checking for None is the appropriate way to determine if a new graph is needed.

# Check if a graph needs to be captured. get_graph returns None if a graph for the batch_size doesn't exist.
if self.graph.get_graph(find_graph_batch_size) is None:

Comment on lines 56 to 58
if batch_size in self.graph:
    find_graph = self.graph.pop(batch_size)
    self.graph[batch_size] = find_graph

Severity: medium

The code retrieves and re-inserts the graph to update its position in the LRU. Consider adding a comment to explain the purpose of this operation, specifically mentioning that it's for LRU management.

find_graph = self.graph.pop(batch_size)  # remove the graph from its current position
self.graph[batch_size] = find_graph  # re-insert it as the most recently used entry
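
As a side note, collections.OrderedDict.move_to_end performs the same recency update in a single call; a minimal equivalent of the suggestion above, assuming self.graph is an OrderedDict:

if batch_size in self.graph:
    self.graph.move_to_end(batch_size)  # mark as most recently used (LRU update)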

Comment on lines 65 to 67
oldest_batch_size, oldest_graph = self.graph.popitem(last=False)
del oldest_graph
torch.cuda.empty_cache()

Severity: medium

When evicting the oldest graph, make sure no other references to the graph object remain so that its memory can actually be released before torch.cuda.empty_cache() is called. Also, consider logging the batch size of the evicted graph for debugging purposes.

oldest_batch_size, oldest_graph = self.graph.popitem(last=False)
del oldest_graph  # drop the local reference so the graph can be freed
torch.cuda.empty_cache()
logger.info(f"Evicted CUDA graph for batch size: {oldest_batch_size}")

@@ -84,6 +99,7 @@ def _capture_decode(self, decode_func, input_ids: torch.Tensor, infer_state: Inf
         with lightllm_capture_graph(dist_group):
             with torch.cuda.graph(graph_obj, pool=self.mempool):
                 model_output = decode_func(input_ids, infer_state)
+        # we assume batch_size is already dealed with find_closest_graph_batch_size outside

Severity: medium

The comment "we assume batch_size is already dealed with find_closest_graph_batch_size outside" is a bit vague. Clarify what "dealed with" means in this context. Specifically, state that the batch size has already been adjusted to the closest supported graph batch size.

# We assume batch_size has already been adjusted to the closest supported graph batch size

@@ -113,6 +132,7 @@ def _capture_decode_overlap(
         with lightllm_capture_graph(dist_group):
             with torch.cuda.graph(graph_obj, pool=self.mempool):
                 model_output, model_output1 = decode_func(input_ids, infer_state, input_ids1, infer_state1)
+        # we assume batch_size is already dealed with find_closest_graph_batch_size outside

Severity: medium

Similar to the single decode case, clarify what "dealed with" means in this context. Specifically, state that the batch size has already been adjusted to the closest supported graph batch size.

# We assume batch_size has already been adjusted to the closest supported graph batch size
