
Conversation

@larryliu0820 (Contributor) commented on Oct 10, 2025

This pull request introduces changes to the CUDA workflow, model artifact handling, and multimodal runner logic. The main changes include restructuring the GitHub Actions workflow to separate model export, benchmarking, and end-to-end testing for the Voxtral CUDA pipeline, improving artifact management and reproducibility. Additionally, the multimodal runner now supports automatic conversion of audio tensors to bfloat16, ensuring compatibility with expected input types. There are also enhancements to caching and symbol registration in the CUDA backend, and build system updates to support linking the CUDA backend.

Workflow and Artifact Management Improvements:

  • Refactored .github/workflows/cuda.yml to split the Voxtral CUDA pipeline into three jobs: export-voxtral-cuda-artifact (exports and stores model artifacts), benchmark-voxtral-cuda (benchmarks using the exported artifacts), and test-voxtral-cuda-e2e (runs a full end-to-end test that downloads the artifacts and feeds audio input). Improved artifact handling and reproducibility, and added explicit checks for required files. [1] [2] [3] [4] [5]

Multimodal Runner Logic:

  • Added automatic conversion of audio tensors to bfloat16 in MultimodalPrefiller::prefill, backed by a new convert_to_bfloat16 helper in util.h. This ensures that audio inputs match the dtype the encoder expects, improving robustness for multimodal inference (a standalone sketch of the conversion idea follows). [1] [2]
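For illustration, a minimal, self-contained sketch of the float32-to-bfloat16 conversion idea (hypothetical helper names, not the actual convert_to_bfloat16 signature from util.h; it uses the same round-to-nearest-even scheme that appears in the util.h excerpt quoted later in this review):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// bfloat16 keeps the upper 16 bits of a float32; adding a bias first rounds
// to nearest even instead of truncating.
inline uint16_t float_to_bf16_bits(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  uint32_t rounding_bias = 0x7FFF + ((bits >> 16) & 1);
  return static_cast<uint16_t>((bits + rounding_bias) >> 16);
}

// Convert a whole float32 buffer (e.g. audio features) before prefill.
std::vector<uint16_t> convert_buffer_to_bf16(const std::vector<float>& src) {
  std::vector<uint16_t> dst(src.size());
  for (size_t i = 0; i < src.size(); ++i) {
    dst[i] = float_to_bf16_bits(src[i]);
  }
  return dst;
}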

CUDA Backend and Caching Enhancements:

  • Improved the caching of tensor strides and sizes in common_shims.cpp by validating cached values against the tensor's current metadata and refreshing them when they are stale. This prevents stale-cache issues and ensures correct tensor metadata (see the sketch after this list). [1] [2]
  • Added dynamic symbol re-registration in CudaBackend to handle multiple shared objects in the same process, ensuring correct execution when switching between models.
  • Removed redundant logging statements in CUDA backend for cleaner output. [1] [2]
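For illustration, a minimal sketch of the validate-then-refresh caching idea (simplified stand-in types, not the actual common_shims.cpp code): a cached entry is reused only while it still matches the tensor's current metadata, and is rebuilt when the pointer has been reused for a tensor with different sizes.

#include <cstdint>
#include <unordered_map>
#include <vector>

struct TensorStub {               // stand-in for the real tensor type
  std::vector<int64_t> sizes;
};

std::unordered_map<const TensorStub*, std::vector<int64_t>> size_cache;

// Return cached sizes, refreshing the entry if it has gone stale.
const std::vector<int64_t>& cached_sizes(const TensorStub* t) {
  auto it = size_cache.find(t);
  if (it != size_cache.end() && it->second == t->sizes) {
    return it->second;            // cache hit and still valid
  }
  return size_cache.insert_or_assign(t, t->sizes).first->second;
}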

Build System Updates:

  • Updated CMakeLists.txt and executorch-config.cmake to include and link the CUDA backend (aoti_cuda) when building Voxtral and other components, improving build flexibility and CUDA support. [1] [2]

Debugging and Tuning Options:

  • Added support for enabling debug compilation in cuda_backend.py via the DEBUG environment variable, allowing easier troubleshooting and development.


pytorch-bot bot commented Oct 10, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14980

Note: Links to docs will display an error until the docs builds have been completed.

❌ 4 New Failures, 17 Pending

As of commit be5d187 with merge base 66c3dea:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on Oct 10, 2025
@larryliu0820 added the release notes: multimodal (changes and new features for multimodal support) and release notes: desktop (for desktop/laptop workstream) labels on Oct 10, 2025
@larryliu0820 marked this pull request as ready for review on October 10, 2025 at 04:44
@Gasoonjia (Contributor) left a comment:

Thanks for your great work!
The size/stride change seems pretty strange to me: I can't imagine a case where the tensor pointer stays the same while its size/stride changes.

Comment on lines +53 to +60
std::vector<int64_t> strides(tensor->dim());
auto tensor_strides = tensor->strides();
for (ssize_t i = 0; i < tensor->dim(); i++) {
strides[i] = static_cast<int64_t>(tensor_strides[i]);
}
auto it =
internal::tensor_to_strides.insert_or_assign(tensor, std::move(strides))
.first;
A reviewer (Contributor) commented:
why are we now allocating a vector unconditionally? this seems less efficient than the old code.

@Gasoonjia (Contributor) replied on Oct 10, 2025:
I just believe the original logic is too complex.
We can also do an in-place update: if the tensor is already in the map, we can reuse the existing vector instead of creating a new one.
That has the same order of memory consumption while keeping the code logic cleaner.

A reviewer (Contributor) replied:
the branch on whether the tensor is already present in the map before creating and filling out a new vector is very important; it's the difference between doing a heap allocation once and doing it every time.
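For illustration, a minimal sketch of the pattern this thread converges on (simplified, hypothetical names, not the actual ExecuTorch code): look the entry up first, reuse the existing vector's storage when the tensor is already cached, and pay for a heap allocation only the first time a tensor is seen.

#include <cstdint>
#include <unordered_map>
#include <vector>

using SizeVec = std::vector<int64_t>;
std::unordered_map<const void*, SizeVec> cache;

const SizeVec& update_cached_sizes(const void* key, const SizeVec& current) {
  auto it = cache.find(key);
  if (it == cache.end()) {
    it = cache.emplace(key, current).first;     // first use: allocate once
  } else if (it->second != current) {
    // stale entry: overwrite elements in place, reusing existing capacity
    it->second.assign(current.begin(), current.end());
  }
  return it->second;
}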

Comment on the analogous sizes-caching hunk (excerpt; the old branch-and-emplace code, followed by the new unconditional allocation):
      sizes[i] = tensor_sizes[i];
    }
    it = internal::tensor_to_sizes.emplace(tensor, std::move(sizes)).first;
  std::vector<int64_t> sizes(tensor->dim());
A reviewer (Contributor) commented:
ditto

Comment on lines +164 to +170
// bfloat16 is the upper 16 bits of float32
uint32_t float_bits;
std::memcpy(&float_bits, &float_data[i], sizeof(float));

// Rounding: add 0x7FFF to round to nearest even
uint32_t rounding_bias = 0x7FFF + ((float_bits >> 16) & 1);
bf16_data[i] = static_cast<uint16_t>((float_bits + rounding_bias) >> 16);
A reviewer (Contributor) commented:
why can't we use ExecuTorch's BFloat16 class (which is c10::BFloat16 underneath) for this?
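For reference, a minimal sketch of the alternative being suggested, assuming the c10-style BFloat16 type (the include path shown is an assumption; ExecuTorch exposes an equivalent type, and this is not the PR's actual code): the BFloat16 constructor already performs round-to-nearest-even, so the hand-written bias trick above could be replaced by a plain conversion loop.

#include <c10/util/BFloat16.h>  // assumed path; adjust to the type ExecuTorch re-exports
#include <cstddef>

// Convert a float32 buffer using the BFloat16 class; its float constructor
// rounds to nearest even, matching the manual 0x7FFF bias in the snippet above.
void convert_with_bfloat16_class(const float* src, c10::BFloat16* dst, size_t n) {
  for (size_t i = 0; i < n; ++i) {
    dst[i] = c10::BFloat16(src[i]);
  }
}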

Labels: CLA Signed · release notes: desktop · release notes: multimodal