
Check for updates concurrently #4388


Merged

Conversation

FranciscoTGouveia
Contributor

This change is related to #3132.

Currently, the rustup check command verifies updates for each toolchain sequentially.
This patch modifies that behavior to perform the checks concurrently.

To implement this change, the futures_util crate was introduced. I would appreciate feedback on this choice, and whether there might be a preferable alternative for handling concurrency.

My first attempt at adding concurrency caused all output to be printed only after every channel had been checked, which deviated from the current behavior of the command.
To preserve the existing behavior (reporting progress as each toolchain is checked), I tried printing concurrently as well.
However, this led to race conditions, resulting in inconsistent output order.

To address this, I used the indicatif crate (cc @djc) to display progress bars while maintaining the correct output order.

This PR will soon be updated to ensure progress bars are correctly written to stdout in order to pass the tests.

Regarding performance, benchmarks with hyperfine showed a 2.2× speedup compared to the sequential version. These benchmarks were run 100 times (with 5 warmup runs) over a 50 Mbps connection.
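To make the approach more concrete, here is a minimal sketch of the general shape: one progress bar per channel, with the checks driven concurrently while `indicatif` keeps the on-screen order stable. This is not the code in this PR; `check_channel`, the channel names, and the concurrency limit are placeholders.

```rust
use futures_util::{stream, StreamExt};
use indicatif::{MultiProgress, ProgressBar, ProgressDrawTarget};

// Placeholder for the real per-channel network check.
async fn check_channel(name: &str) -> String {
    format!("{name} - Up to date")
}

// Sketch: add one bar per channel up front, then drive the checks
// concurrently; `indicatif` keeps each bar on its own line, so the
// displayed order matches the channel order regardless of completion order.
async fn check_all(channels: Vec<String>) {
    let mp = MultiProgress::with_draw_target(ProgressDrawTarget::stdout());
    let bars: Vec<ProgressBar> = channels
        .iter()
        .map(|name| {
            let pb = mp.add(ProgressBar::new_spinner());
            pb.set_message(format!("checking {name}"));
            pb
        })
        .collect();

    stream::iter(channels.into_iter().zip(bars))
        .map(|(name, pb)| async move {
            let result = check_channel(&name).await;
            pb.finish_with_message(result);
        })
        .buffered(8) // bounded concurrency; the limit is arbitrary here
        .collect::<Vec<_>>()
        .await;
}
```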

@FranciscoTGouveia FranciscoTGouveia force-pushed the check-updates-concurrently branch from b4a3404 to f8156e7 Compare July 3, 2025 14:09
@FranciscoTGouveia FranciscoTGouveia marked this pull request as ready for review July 4, 2025 14:45
@FranciscoTGouveia
Contributor Author

After taking a closer look at the code, I realized that buffered can be replaced with buffer_unordered when running in a tty, since indicatif already ensures that output appears in the correct order.

To see if this had a practical impact, I re-ran the benchmarks (using the same settings as before) and observed a 3.1x speedup.
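For illustration, the difference boils down to something like the following sketch (placeholder checks and an arbitrary concurrency limit, not the PR's code): with a tty, `indicatif` pins each bar to its own line, so completion order no longer matters and `buffer_unordered` avoids head-of-line blocking; without a tty, `buffered` keeps plain output in the original channel order.

```rust
use futures_util::{stream, StreamExt};

// Sketch only: choose the combinator based on whether stdout is a terminal.
async fn run_checks(names: Vec<String>, is_a_tty: bool) -> Vec<String> {
    let limit = 8; // arbitrary concurrency limit for the sketch
    let checks = stream::iter(names).map(|n| async move { format!("{n}: checked") });
    if is_a_tty {
        // Results may complete in any order; indicatif keeps the display ordered.
        checks.buffer_unordered(limit).collect::<Vec<_>>().await
    } else {
        // Plain output: preserve the original channel order.
        checks.buffered(limit).collect::<Vec<_>>().await
    }
}
```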

Member

@rami3l rami3l left a comment

Nice work overall 🎉 I have a few suggestions regarding some small details, but modulo those everything seems to be going pretty well. Looking forward to landing this patch!

let mut update_available = false;
let channels = cfg.list_channels()?;
let num_channels = channels.len();
if num_channels > 0 {
    let mp = MultiProgress::with_draw_target(ProgressDrawTarget::stdout());

    let mp = if is_a_tty {
Member

Bonus: cargo uses the CARGO_TERM_PROGRESS_WHEN environment variable to control the drawing of the progress bar. I think it'd be interesting to mirror that here as well, maybe in a separate commit or PR.
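For reference, a hypothetical version of that mirroring could look like the sketch below. The `RUSTUP_TERM_PROGRESS_WHEN` variable name is invented here; nothing like it exists in rustup at this point.

```rust
use indicatif::ProgressDrawTarget;

// Hypothetical follow-up, mirroring cargo's CARGO_TERM_PROGRESS_WHEN
// ("auto" | "always" | "never"); the variable name is made up for this sketch.
fn progress_target(is_a_tty: bool) -> ProgressDrawTarget {
    match std::env::var("RUSTUP_TERM_PROGRESS_WHEN").as_deref() {
        Ok("always") => ProgressDrawTarget::stdout(),
        Ok("never") => ProgressDrawTarget::hidden(),
        // "auto" or unset: draw only when stdout is a terminal.
        _ if is_a_tty => ProgressDrawTarget::stdout(),
        _ => ProgressDrawTarget::hidden(),
    }
}
```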

@@ -223,6 +232,81 @@ impl io::Write for ColorableTerminalLocked {
}
}

impl TermLike for ColorableTerminal {
    fn width(&self) -> u16 {
        Term::stdout().size().1
Member

Bonus: cargo uses the CARGO_TERM_PROGRESS_WIDTH environment variable to control the terminal width being detected. I think it'd be interesting to mirror that here as well, maybe in a separate commit or PR.
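Likewise, a hypothetical width override might look like this (again, the `RUSTUP_TERM_PROGRESS_WIDTH` name is invented for the sketch):

```rust
// Hypothetical follow-up, mirroring cargo's CARGO_TERM_PROGRESS_WIDTH:
// honour an explicit width if set, otherwise fall back to the detected
// terminal width (console's size() returns (rows, columns)).
fn progress_width() -> u16 {
    std::env::var("RUSTUP_TERM_PROGRESS_WIDTH")
        .ok()
        .and_then(|w| w.parse().ok())
        .unwrap_or_else(|| console::Term::stdout().size().1)
}
```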

@FranciscoTGouveia FranciscoTGouveia force-pushed the check-updates-concurrently branch 2 times, most recently from db76e5a to 86a7a58 Compare July 6, 2025 13:02
Contributor

@djc djc left a comment

Did you benchmark the performance impact of one or more rustup commands?

Would also be good to add an animation/video of the new output.

})
.collect();

let channels = tokio_stream::iter(channels.into_iter().zip(progress_bars))
Contributor

Why are progress_bars and channels created in separate loops? Suggest just using for-loops here instead of a trivial combinator chain with a long map() closure.

Member

@djc We are mapping over a stream rather than an iterator, so doing a plain for loop might not help here. Depending on the current configuration more combinators will be involved down the line, notably with .buffered() and .buffer_unordered().

To later support `indicatif`'s progress bars, this trait was implemented
for `ColorableTerminal`.

It is important to note that `is_a_tty` and `color_choice` will be
needed to decide whether progress bars are written to stdout (or
hidden) and whether they are styled with colors, respectively.
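As a rough illustration of what such a `TermLike` implementation involves, here is a standalone sketch that simply delegates to `console::Term`. The real implementation is on rustup's `ColorableTerminal` and also takes `is_a_tty` and `color_choice` into account; `StdoutTerm` below is a stand-in.

```rust
use std::io;

use console::Term;
use indicatif::TermLike;

// Sketch only: a stdout wrapper standing in for `ColorableTerminal`,
// with every `TermLike` method delegated to `console::Term`.
#[derive(Debug)]
struct StdoutTerm(Term);

impl TermLike for StdoutTerm {
    fn width(&self) -> u16 {
        self.0.size().1
    }
    fn height(&self) -> u16 {
        self.0.size().0
    }
    fn move_cursor_up(&self, n: usize) -> io::Result<()> {
        self.0.move_cursor_up(n)
    }
    fn move_cursor_down(&self, n: usize) -> io::Result<()> {
        self.0.move_cursor_down(n)
    }
    fn move_cursor_right(&self, n: usize) -> io::Result<()> {
        self.0.move_cursor_right(n)
    }
    fn move_cursor_left(&self, n: usize) -> io::Result<()> {
        self.0.move_cursor_left(n)
    }
    fn write_line(&self, s: &str) -> io::Result<()> {
        self.0.write_line(s)
    }
    fn write_str(&self, s: &str) -> io::Result<()> {
        self.0.write_str(s)
    }
    fn clear_line(&self) -> io::Result<()> {
        self.0.clear_line()
    }
    fn flush(&self) -> io::Result<()> {
        self.0.flush()
    }
}
```

Such a terminal can then be handed to `indicatif` as a draw target via `ProgressDrawTarget::term_like`, if I recall the API correctly.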
@FranciscoTGouveia FranciscoTGouveia force-pushed the check-updates-concurrently branch from 86a7a58 to 4df6fc8 Compare July 7, 2025 19:29
@FranciscoTGouveia
Contributor Author

In order to evaluate the performance impact of these changes, I ran a series of benchmarks using hyperfine.
My setup included a 50 Mbps internet connection, 16 GB of RAM, and an i7-1165G7 processor.

I compared my modified version of the rustup check command against the current implementation on the master branch.
Each benchmark was executed 100 times, with 5 warmup runs beforehand.
To ensure a meaningful comparison, I checked 20 different toolchains for updates in each run.

The results showed a 3.3× speedup over the current implementation.
This exceeded my initial expectations, thanks to the realization that, since indicatif already handles the output in the correct order, we could safely replace buffered with buffer_unordered.

To give a better idea of how this change affects the user experience, I have included a short animation below.
On the left, you’ll see the current (sequential) behavior, and on the right, the new (concurrent) behavior:

showcase

Member

@rami3l rami3l left a comment

That's much clearer, thanks!

This change required introducing the `futures_util` crate.
Feedback on the above is greatly appreciated.

To ensure that there are no hangs, a special `if` was needed so that we
never call `.buffered(0)`. Otherwise, this would cause a hang (kudos to
rami3l for helping me out with this).

To report progress concurrently whilst maintaining order, the
`indicatif` crate was used.
Note that, if we are not in a *tty*, the progress bars will be hidden and
the results will instead be printed after every channel has been checked
for updates.

NB: since we are using `indicatif` to display progress when running in
a *tty*, we can use `buffer_unordered` instead of `buffered`, as
`indicatif` already handles the output in the correct order.
A quick benchmark (100 runs + 10 warmup runs over a 50 Mbps connection)
showed that this small change yields a **1.2x** speedup.
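A minimal sketch of that guard is shown below (placeholder names; the concurrency limit here is simply the channel count, which may differ from the real code):

```rust
use futures_util::{stream, StreamExt};

// Sketch of the guard described above: `.buffered(0)` never polls any
// future, so an empty channel list must be special-cased up front.
async fn check_all(channels: Vec<String>) -> Vec<String> {
    let num_channels = channels.len();
    if num_channels == 0 {
        // Skip the stream entirely; `.buffered(0)` would hang forever.
        return Vec::new();
    }
    stream::iter(channels)
        .map(|name| async move { format!("{name}: checked") })
        .buffered(num_channels)
        .collect::<Vec<_>>()
        .await
}
```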
@FranciscoTGouveia FranciscoTGouveia force-pushed the check-updates-concurrently branch from 4df6fc8 to ee063c6 Compare July 8, 2025 07:43
@djc
Contributor

djc commented Jul 8, 2025

It seems like a substantial downside to me that the new version takes so long to show any output at all. IMO you should look into fixing that.

@rami3l
Member

rami3l commented Jul 8, 2025

It seems like a substantial downside to me that the new version takes so long to show any output at all. IMO you should look into fixing that.

@djc I believe you are reading the gif wrong? This is not a side-by-side comparison; these two commands are executed one after another, both timed so that you can see the second is faster.

@FranciscoTGouveia It's true that we would like to eliminate any such confusion in the final demo to be used in the blog post/release notes or whatever it may be.

@FranciscoTGouveia
Contributor Author

FranciscoTGouveia commented Jul 8, 2025

I’m sorry, I should have showcased the implementation a bit better.

In reality, the new version displays output instantaneously. The delay you see between when the left side finishes and the right side starts is simply because I only ran the new version after the base version completed. The pause was caused by me switching panes in tmux during the recording.
Below is an updated version of the animation that I hope better illustrates the behavior:

showcase_final

@djc
Contributor

djc commented Jul 8, 2025

Ahh, awesome, that's much more impressive!

@rami3l
Member

rami3l commented Jul 8, 2025

As @FranciscoTGouveia and I have agreed to address the bonus points (#4388 (comment), #4388 (comment)) in a later commit, I think this PR is ready to go.

Nice work @FranciscoTGouveia !

@rami3l rami3l added this pull request to the merge queue Jul 8, 2025
// Ensure that `.buffered()` is never called with 0 as this will cause a hang.
// See: https://github.com/rust-lang/futures-rs/pull/1194#discussion_r209501774
if num_channels > 0 {
    let multi_progress_bars = if is_a_tty {
Contributor

Nit: extract the MultiProgress::with_draw_target() out of the if (also, I think a match would look better than an if for this).
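One possible reading of that suggestion, sketched below (not the actual follow-up code): compute only the draw target in the conditional and call `MultiProgress::with_draw_target()` once.

```rust
use indicatif::{MultiProgress, ProgressDrawTarget};

// Sketch of the suggested refactor: the `match` selects the target,
// and `with_draw_target()` is called exactly once outside it.
fn multi_progress_bars(is_a_tty: bool) -> MultiProgress {
    MultiProgress::with_draw_target(match is_a_tty {
        true => ProgressDrawTarget::stdout(),
        false => ProgressDrawTarget::hidden(),
    })
}
```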

Member

@djc Oops, that happened after the merge, so... @FranciscoTGouveia maybe you could fix it in a new PR? 🙏

Contributor Author

Yes, of course! Thank you for pointing this out! :)

Contributor Author

Doesn't this get overshadowed by this comment?

Contributor

Potentially! If so, feel free to ignore my comment.

Contributor Author

I realize now that it doesn't overshadow the previous point; I'm sorry.
I have made that change in the follow-up PR.

Merged via the queue into rust-lang:master with commit ed67088 Jul 8, 2025
29 checks passed
@rami3l rami3l added this to the 1.28.3 milestone Jul 12, 2025