Add Entropy Control to GRPOTrainer #3628
base: main
Conversation
Note that there is a parallel PR (#3563) working on entropy-based filtering; we're going to need to sync these.
Tgt ent control
@LeonEricsson @qgallouedec could you please help review the latest changes? Thanks
Thanks for the work. A few comments on my end
Thanks for your comments. Resolved.
A few more comments:
trl/trainer/grpo_trainer.py
Outdated
@@ -676,6 +720,18 @@ def __init__(
            raise NotImplementedError(
                "Liger Kernels don't currently support masking token positions based on entropy."
            )
        # Entropy loss weight
        self.ent_coef = max(args.ent_coef, 0.0)
I think we can allow the user to set a negative weight if they choose to. I don't see a specific use case for it, but I don't see the harm in allowing it.
What if the user sets a negative weight? Should we multiply the entropy loss directly by the negative weight?
yes
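For context, a minimal sketch of how the coefficient would enter the loss (illustrative tensor names, not the PR's exact code); with no clamping, a negative `ent_coef` simply flips the entropy bonus into a penalty:

```python
import torch

def loss_with_entropy(
    policy_loss: torch.Tensor,        # scalar GRPO policy loss
    per_token_entropy: torch.Tensor,  # (batch, seq_len) token entropies
    completion_mask: torch.Tensor,    # (batch, seq_len) 1 for completion tokens
    ent_coef: float,
) -> torch.Tensor:
    # Mean entropy over non-padded completion tokens.
    mean_entropy = (per_token_entropy * completion_mask).sum() / completion_mask.sum()
    # Subtracting the scaled entropy makes a positive ent_coef an
    # exploration bonus; a negative ent_coef penalizes high entropy.
    return policy_loss - ent_coef * mean_entropy
```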
Co-authored-by: LeonEricsson <[email protected]>
Static coefficient of the entropy regularization term in the loss.
A positive coefficient adds an entropy bonus to encourage exploration.
It is also used as the initial entropy coefficient when using adaptive entropy control.
use_adapt_entropy (`bool`, *optional*, defaults to `False`):
Sorry for pettiness, but can we do
- use_adapt_entropy (`bool`, *optional*, defaults to `False`):
+ use_adaptive_entropy (`bool`, *optional*, defaults to `False`):
self.use_adapt_ent = use_adapt_ent
self.ent_coef = ent_coef
self.min_ent_coef = min_ent_coef
self.max_ent_coef = max_ent_coef
self.delta_ent_coef = delta_ent_coef
self.target_ent = target_ent
Change these everywhere the same way we did in the config, e.g. `entropy_coef_min`. Also use `entropy` instead of `ent` throughout.
While reviewing the updated entropy controller I noted the following issues, which I should have realized sooner; apologies for that.
I suggest moving ownership of the entropy coefficient parameter to GRPOTrainer, making the entropy controller a pure strategy object that holds the logic to step the entropy coefficient (rename
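A minimal sketch of what that split could look like, using the `entropy_*` names suggested above; the additive step toward a target entropy with clamping is an assumption inferred from the min/max/delta/target parameters, not necessarily the PR's exact update rule:

```python
class AdaptiveEntropyController:
    """Pure strategy object: holds no coefficient state, only the step logic."""

    def __init__(self, entropy_coef_min: float, entropy_coef_max: float,
                 entropy_coef_delta: float, entropy_target: float):
        self.entropy_coef_min = entropy_coef_min
        self.entropy_coef_max = entropy_coef_max
        self.entropy_coef_delta = entropy_coef_delta
        self.entropy_target = entropy_target

    def step(self, entropy_coef: float, current_entropy: float) -> float:
        # Raise the coefficient when entropy falls below the target
        # (push toward exploration), lower it otherwise.
        if current_entropy < self.entropy_target:
            entropy_coef += self.entropy_coef_delta
        else:
            entropy_coef -= self.entropy_coef_delta
        # Clamp into the configured range.
        return min(max(entropy_coef, self.entropy_coef_min), self.entropy_coef_max)
```

The trainer would then own `self.entropy_coef` and call `controller.step(...)` once per optimization step, which also keeps the stateful coefficient alongside the rest of the trainer state.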
Yes, I also think it might be better to use a global scheduler that updates the entropy coef based on the global entropy loss gathered from all ranks. I took a look at the original Skywork code and think it might be using a per-rank scheduler to control the entropy coef; if you have time, could you please help confirm? The entropy loss applies the entropy coef here: https://github.com/SkyworkAI/Skywork-OR1/blob/64e96afa213ae89d0ad21932106d3b8aafe9ace2/verl/workers/actor/dp_actor.py#L234 and the entropy controller is defined inside the trainer.
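To illustrate the global variant, a hedged sketch assuming a HF Accelerate setup and the hypothetical controller sketched above (`Accelerator.gather` is a real Accelerate API; the surrounding names are illustrative, not from the PR):

```python
import torch
from accelerate import Accelerator

def step_entropy_coef_globally(
    accelerator: Accelerator,
    controller,                   # e.g. the AdaptiveEntropyController sketched above
    entropy_coef: float,
    local_entropy: torch.Tensor,  # scalar mean entropy on this rank
) -> float:
    # gather() concatenates the per-rank values; their mean is the global
    # entropy, so every rank computes the identical coefficient update.
    global_entropy = accelerator.gather(local_entropy.detach().reshape(1)).mean().item()
    return controller.step(entropy_coef, global_entropy)
```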
@qgallouedec would appreciate your thoughts on dealing with the stateful entropy coefficient. To recap, Adaptive Entropy Control maintains the entropy coefficient
What does this PR do?
Fixes #3320
The initial step is to support static entropy control.
The next step is to support adaptive entropy control; a hypothetical config sketch follows below.
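A hypothetical usage sketch with the parameter names as they stood at review time (a rename to `use_adaptive_entropy` / `entropy_coef_*` was suggested above), so the final API may differ:

```python
from trl import GRPOConfig

config = GRPOConfig(
    output_dir="grpo-entropy-demo",
    ent_coef=0.001,           # static coefficient; initial value when adaptive control is on
    use_adapt_entropy=True,   # enable adaptive entropy control
    min_ent_coef=0.0,         # lower clamp for the adaptive coefficient
    max_ent_coef=0.01,        # upper clamp
    delta_ent_coef=1e-4,      # per-update step size
    target_ent=0.5,           # target entropy the controller tracks
)
```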
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.