Conversation

@fzyzcjy fzyzcjy commented Oct 10, 2025

Motivation

not tested yet

Modifications

Related Issues

Accuracy Test

Benchmark & Profiling

Checklist

@fzyzcjy fzyzcjy requested a review from FrankLeeeee as a code owner October 10, 2025 14:41

Summary of Changes

Hello @fzyzcjy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a critical utility for managing GPU resources in CI environments. The new gpu_lock_exec.py script provides a robust mechanism to acquire exclusive access to GPUs, preventing conflicts when multiple CI jobs run concurrently. By ensuring that each job can secure its required GPU resources, this change enhances the reliability and stability of GPU-intensive CI pipelines, ultimately leading to more consistent test results and efficient resource utilization.

Highlights

  • GPU Locking Utility: Introduces a new Python script, gpu_lock_exec.py, designed to manage GPU resource allocation in CI environments using file-based locking.
  • Flexible GPU Acquisition: The script supports acquiring a specified number of available GPUs or explicitly requesting specific GPU device IDs.
  • Environment Variable Management: Automatically sets the CUDA_VISIBLE_DEVICES environment variable with the IDs of the successfully acquired GPUs before executing the target command.
  • Robustness and Debugging: Includes timeout mechanisms for lock acquisition, retries for contention, and a print-only mode to probe and display currently free GPUs without holding locks.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/test.yaml
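The acquisition flow described in the highlights can be sketched roughly as follows. This is a hedged illustration, not the script's actual code: the `acquire_gpus` function and the `{gpu_id}` lock-file naming are assumptions, and the real gpu_lock_exec.py additionally handles retries, timeouts, and exporting CUDA_VISIBLE_DEVICES before executing the target command.

```python
import fcntl
import os
from typing import List, Tuple


def acquire_gpus(
    lock_path_pattern: str, total_gpus: int, want: int
) -> Tuple[List[int], List[int]]:
    """Single pass: try to grab non-blocking flocks on `want` GPU lock files.

    Returns (acquired_gpu_ids, open_fds). The fds must stay open for the
    locks to remain held; closing them releases the GPUs.
    """
    acquired: List[int] = []
    fds: List[int] = []
    for gpu_id in range(total_gpus):
        path = lock_path_pattern.format(gpu_id=gpu_id)
        fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o666)
        try:
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            os.close(fd)  # another job holds this GPU; move on
            continue
        acquired.append(gpu_id)
        fds.append(fd)
        if len(acquired) == want:
            return acquired, fds
    # Not enough free GPUs this round: release everything and report failure,
    # so a caller could sleep and retry until a timeout expires.
    for fd in fds:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
    return [], []
```

A caller that gets a non-empty result would then set `os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(map(str, acquired))` and run the target command while keeping the fds open.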


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a Python script for locking GPUs in a CI environment, which is a solid approach. My review focuses on improving the script's robustness, clarity, and adherence to Python best practices. I've suggested using a context manager to prevent potential resource leaks, refining error handling with more specific exception types, improving type hints for better static analysis, and removing some redundant code. These changes will make the script more reliable and maintainable.

Comment on lines +71 to +82
try:
    fd_lock = FdLock(args.lock_path_pattern, i)
    fd_lock.open()
    try:
        fd_lock.lock()
        fcntl.flock(fd_lock.fd, fcntl.LOCK_UN)
        free.append(i)
    except BlockingIOError:
        pass
    fd_lock.close()
except Exception as e:
    print(f"Warning: Error while probing lock: {e}", file=sys.stderr, flush=True)


Severity: high

The current implementation for probing locks in _execute_print_only can leak file descriptors if an exception occurs after opening the file but before closing it.

To make resource management safer and more Pythonic, you can turn FdLock into a context manager. This ensures that file descriptors are always closed, even if errors occur.

First, add __enter__ and __exit__ methods to the FdLock class:

class FdLock:
    # ... existing methods ...

    def __enter__(self):
        self.open()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

Then, you can simplify _execute_print_only to use a with statement, which will automatically handle opening and closing the lock file.

Suggested change (before):

try:
    fd_lock = FdLock(args.lock_path_pattern, i)
    fd_lock.open()
    try:
        fd_lock.lock()
        fcntl.flock(fd_lock.fd, fcntl.LOCK_UN)
        free.append(i)
    except BlockingIOError:
        pass
    fd_lock.close()
except Exception as e:
    print(f"Warning: Error while probing lock: {e}", file=sys.stderr, flush=True)

Suggested change (after):

try:
    with FdLock(args.lock_path_pattern, i) as fd_lock:
        try:
            fd_lock.lock()
            fcntl.flock(fd_lock.fd, fcntl.LOCK_UN)
            free.append(i)
        except BlockingIOError:
            pass
except Exception as e:
    print(f"Warning: Error while probing lock: {e}", file=sys.stderr, flush=True)
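As a standalone illustration of the context-manager pattern suggested above, the following sketch pairs a minimal FdLock with a probe loop. FdLock here is a hypothetical stand-in written for this example, not the exact class from the PR.

```python
import fcntl
import os
from typing import List


class FdLock:
    """Minimal file-descriptor lock; a stand-in for the PR's FdLock class."""

    def __init__(self, path_pattern: str, gpu_id: int):
        self.path = path_pattern.format(gpu_id=gpu_id)
        self.fd = None

    def open(self):
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR, 0o666)

    def lock(self):
        # Non-blocking: raises BlockingIOError if another holder has the lock.
        fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)

    def close(self):
        if self.fd is not None:
            os.close(self.fd)
            self.fd = None

    def __enter__(self):
        self.open()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Runs even when an exception propagates, so the fd can never leak.
        self.close()


def probe_free(path_pattern: str, total_gpus: int) -> List[int]:
    """Return GPU ids whose lock files are currently unlocked."""
    free: List[int] = []
    for i in range(total_gpus):
        try:
            with FdLock(path_pattern, i) as fd_lock:
                try:
                    fd_lock.lock()
                    fcntl.flock(fd_lock.fd, fcntl.LOCK_UN)
                    free.append(i)
                except BlockingIOError:
                    pass  # lock is held: this GPU is busy
        except OSError as e:
            print(f"Warning: Error while probing lock: {e}")
    return free
```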

Comment on lines +58 to +62
if "{gpu_id}" not in args.lock_path_pattern:
    raise Exception("ERROR: --lock-path-pattern must contain '{i}' placeholder.")

if not args.cmd and not args.print_only:
    raise Exception("ERROR: missing command to run. Use -- before command.")


Severity: medium

The argument validation in _parse_args can be improved for clarity and correctness:

  1. The error message on line 59 contains a typo. It refers to '{i}' while the check is for '{gpu_id}'.
  2. Using the generic Exception is not ideal. Raising a more specific exception like ValueError is better practice as it allows for more granular error handling if this script is ever imported as a module.

I've suggested changes to address both points.

Suggested change (before):

if "{gpu_id}" not in args.lock_path_pattern:
    raise Exception("ERROR: --lock-path-pattern must contain '{i}' placeholder.")
if not args.cmd and not args.print_only:
    raise Exception("ERROR: missing command to run. Use -- before command.")

Suggested change (after):

if "{gpu_id}" not in args.lock_path_pattern:
    raise ValueError("ERROR: --lock-path-pattern must contain '{gpu_id}' placeholder.")
if not args.cmd and not args.print_only:
    raise ValueError("ERROR: missing command to run. Use -- before command.")

start = time.time()
_ensure_lock_files(path_pattern, total_gpus)
while True:
    fd_locks: List = []


Severity: medium

The type hint for fd_locks is List, which is generic. It should be List[FdLock] to improve readability and allow static analysis tools to catch potential bugs. Since FdLock is defined later in the file, you'll need to use a string forward reference.

Suggested change (before):

    fd_locks: List = []

Suggested change (after):

    fd_locks: List["FdLock"] = []
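For context, a quoted forward reference lets an annotation name a class defined later in the same file: typing stores the string unresolved, so the module imports fine while type checkers still see the concrete type. A minimal standalone sketch (the class body is a stub):

```python
from typing import List

# "FdLock" is a forward reference: the class is defined below this line.
fd_locks: List["FdLock"] = []


class FdLock:
    """Stub class, defined after the annotation that names it."""


fd_locks.append(FdLock())
```

With `from __future__ import annotations` (PEP 563), all annotations become lazily evaluated strings and the quotes are no longer needed.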

@FrankLeeeee
Collaborator

Thanks!

@fzyzcjy
Contributor Author

fzyzcjy commented Oct 13, 2025

you are welcome!

@FrankLeeeee FrankLeeeee merged commit 3d16d1a into sgl-project:main Oct 13, 2025
0 of 3 checks passed