@zhyncs, can you help me?
I currently have 4 nodes with 32 ranks in total (8 per node). I want to run tensor parallelism (TP) within each node and data parallelism (DP) across the nodes. However, it seems that tp_size can only be set to 32, which would mean all ranks are used for TP. Is that correct?
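For reference, here is a minimal, framework-agnostic sketch of the layout described above, using PyTorch's DeviceMesh rather than any particular serving engine's launcher: a 4 × 8 mesh over the 32 ranks, where the inner dimension of 8 ranks does TP within a node and the outer dimension of 4 does DP across nodes. The mesh shape, the torchrun launch line, and the use of LOCAL_RANK are assumptions for illustration only.

```python
# Sketch of the intended 4-node x 8-GPU layout with PyTorch's DeviceMesh
# (PyTorch >= 2.2). This only illustrates the process-group layout; it is
# not a launcher for any specific serving framework.
# Assumed launch (one process per GPU, 32 in total):
#   torchrun --nnodes 4 --nproc-per-node 8 \
#            --rdzv-backend c10d --rdzv-endpoint <master-host>:29500 this_script.py
import os
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh

dist.init_process_group(backend="nccl")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))  # set by torchrun

# 2-D mesh over the 32 ranks:
#   outer dim "dp" (size 4) -> data parallelism, one group member per node
#   inner dim "tp" (size 8) -> tensor parallelism, all ranks on one node
mesh = init_device_mesh("cuda", (4, 8), mesh_dim_names=("dp", "tp"))

tp_group = mesh["tp"].get_group()  # 8 ranks sharing a node: shard weights here
dp_group = mesh["dp"].get_group()  # 4 ranks, one per node: split requests/data here

print(f"rank {dist.get_rank()}: "
      f"tp_rank={dist.get_rank(tp_group)}, dp_rank={dist.get_rank(dp_group)}")
```

Under that layout, tp_size would be 8 (the ranks within one node) and dp_size would be 4 (one replica per node); whether the framework being launched here exposes the configuration in exactly this form is the question for the maintainers.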