Unverified commit 0f94e3e1 authored by Anton Vlasjuk, committed by GitHub

Fix accelerate kwargs for versions <0.28.0 (#30086)

* fix learning rate display issue in galore optimizer

* fix kwarg in accelerate when using versions < 0.28.0

* this was supposed to be in the other PR whoops
parent 505854f7
@@ -4374,8 +4374,9 @@ class Trainer:
                 even_batches=accelerator_config.pop("even_batches"),
                 use_seedable_sampler=accelerator_config.pop("use_seedable_sampler"),
             )
-            # this would have been updated above, no need for it anymore
-            accelerator_config.pop("gradient_accumulation_kwargs")
+
+        # this would have been updated above, no need for it anymore
+        accelerator_config.pop("gradient_accumulation_kwargs")
         args = {
             "deepspeed_plugin": self.args.deepspeed_plugin,
             "gradient_accumulation_plugin": gradient_accumulation_plugin,
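The bug class behind this fix can be sketched independently of transformers: a config key that is only consumed inside a version-gated branch must still be popped unconditionally, or on older library versions it stays in the dict and leaks into a constructor that does not accept it. The sketch below uses illustrative stand-in names (`FakeAccelerator`, `create_accelerator`, `is_at_least`), not the actual Trainer or accelerate code:

```python
# Minimal sketch of the bug this commit fixes (all names are illustrative
# stand-ins, not the actual transformers/accelerate code). A kwarg handled
# only behind a version gate must still be popped from the shared config
# dict on OLD versions, otherwise it reaches the constructor and raises
# TypeError: unexpected keyword argument.

ACCELERATE_VERSION = "0.27.2"  # pretend an older accelerate is installed


def is_at_least(version: str, minimum: str) -> bool:
    # naive numeric version compare, sufficient for this sketch
    return tuple(map(int, version.split("."))) >= tuple(map(int, minimum.split(".")))


class FakeAccelerator:
    # stands in for a pre-0.28.0 Accelerator: it accepts even_batches
    # directly and knows nothing about gradient_accumulation_kwargs
    def __init__(self, even_batches=True, dataloader_config=None):
        self.even_batches = even_batches
        self.dataloader_config = dataloader_config


def create_accelerator(accelerator_config: dict) -> FakeAccelerator:
    extra = {}
    if is_at_least(ACCELERATE_VERSION, "0.28.0"):
        # on newer versions these keys move into a dataloader config object
        extra["dataloader_config"] = {
            "even_batches": accelerator_config.pop("even_batches"),
        }
    # the fix: pop unconditionally. Before the fix this pop sat inside the
    # version-gated branch above, so on < 0.28.0 the stale key stayed in
    # accelerator_config and blew up the constructor call below.
    accelerator_config.pop("gradient_accumulation_kwargs", None)
    return FakeAccelerator(**accelerator_config, **extra)


acc = create_accelerator(
    {"gradient_accumulation_kwargs": {"num_steps": 2}, "even_batches": True}
)
print(acc.even_batches)  # → True: passed straight through on old versions
```

With the pop inside the gated branch, running the same call under the simulated 0.27.2 would raise a `TypeError`, which matches the failure mode the commit title describes for accelerate versions below 0.28.0.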