Fix bugs and improve performance of ProxyStream #5703
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
Created #5710
@arturaz could you run a final end-to-end test on …
Also please check if the slowdown that @alexarchambault reported in …
Without the fix:
Looks good then!
I assume #5710 is the one we should merge first, right?
Yes, but I'm still running tests for Netty.
Without JVM id:
PR comments for #5710 and this one look pretty much like copy-and-paste. Can they be made more distinct before merging?
With:

```scala
def shout() = Task.Command {
  println("x" * (10 * 1024 * 1024))
}
```

old:

new:
AI overview:
In summary, while the overall "real time" improvement is modest, the underlying efficiency gains are massive. The new version is significantly better optimized, requiring much less CPU time both for its own code (user time) and in the kernel (sys time). Here's a breakdown of what these timings mean and why this happens:
Please don't paste me AI overviews; if I wanted to hear what Gemini has to say about the results, I can ask it myself.
Noticed while working on #5710.
The `ProxyStream` protocol uses a very small (126-byte) buffer, so there is a lot of byte shuffling going on, especially across the user <-> kernel space boundary.
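For intuition, here is a generic stream-copy loop (illustrative only, not the actual `ProxyStream` code): forwarding a 10 MiB payload (as in the `shout` benchmark above) in 126-byte chunks means on the order of 83,000 writes, while 64 KiB chunks need only about 160.

```scala
import java.io.{InputStream, OutputStream}

// Generic copy loop, for illustration only. Every write of a small chunk is
// another pass through the underlying (often kernel-backed) stream, so the
// chunk size directly controls how many user <-> kernel crossings happen.
def pump(in: InputStream, out: OutputStream, chunkSize: Int): Long = {
  val buf = new Array[Byte](chunkSize)
  var total = 0L
  var read = in.read(buf)
  while (read != -1) {
    out.write(buf, 0, read)
    total += read
    read = in.read(buf)
  }
  out.flush()
  total
}
```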
The `ProxyStream` protocol has been changed to allow chunk sizes bigger than 126 bytes (up to `Int.MaxValue`). Additionally, `ProxyStream` used to truncate exit codes to a single byte; this has been fixed.
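A minimal sketch of the kind of framing this implies, using hypothetical tag names and layout rather than the actual `ProxyStream` wire format: a 4-byte length field lets a single chunk carry up to `Int.MaxValue` bytes, and the exit packet carries the full 32-bit exit code.

```scala
import java.io.DataOutputStream

// Hypothetical framing, not the actual ProxyStream wire format: a 1-byte stream
// tag followed by a 4-byte big-endian length allows chunks of up to
// Int.MaxValue bytes, and the exit packet carries the full 32-bit exit code
// instead of truncating it to a single byte.
object FrameSketch {
  val OutTag  = 1
  val ErrTag  = 2
  val ExitTag = 3

  def writeChunk(out: DataOutputStream, tag: Int, data: Array[Byte]): Unit = {
    out.writeByte(tag)          // which stream this chunk belongs to
    out.writeInt(data.length)   // chunk length, up to Int.MaxValue
    out.write(data)
  }

  def writeExit(out: DataOutputStream, exitCode: Int): Unit = {
    out.writeByte(ExitTag)
    out.writeInt(exitCode)      // full exit code, no truncation
  }
}
```

(The 126-byte cap in the old format presumably comes from packing the chunk length into a single signed header byte.)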
As part of the debugging effort, `ProxyStream` was refactored to be more readable.

This has been tested manually. With:

The `main` branch takes:

This branch takes: