[train_lcm_distill_lora_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env (#8446)
fix num_train_epochs
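The underlying issue is that when `max_train_steps` is derived from `num_train_epochs`, the dataloader length must be measured after it is sharded across processes, and the LR scheduler (which is stepped on every process) must be sized accordingly. A minimal sketch of that arithmetic, with hypothetical parameter names standing in for the script's actual variables:

```python
import math

def scheduler_steps(dataset_len, batch_size, grad_accum,
                    num_processes, num_train_epochs):
    """Hypothetical helper: number of steps to pass to the LR scheduler
    when training for a fixed number of epochs in a distributed setup."""
    # Total batches per epoch before sharding.
    batches_per_epoch = math.ceil(dataset_len / batch_size)
    # After sharding, each process sees roughly 1/num_processes of them.
    sharded_batches = math.ceil(batches_per_epoch / num_processes)
    # Optimizer updates per epoch on each process.
    updates_per_epoch = math.ceil(sharded_batches / grad_accum)
    # The scheduler is stepped on every process, so scale back up.
    return num_train_epochs * updates_per_epoch * num_processes

# Example: 1000 samples, batch size 4, grad accumulation 2, 4 GPUs, 3 epochs.
print(scheduler_steps(1000, 4, 2, 4, 3))
```

Without the sharding step, a multi-GPU run would compute the epoch length from the unsharded dataloader and give the scheduler several times too many steps, so the learning rate would never fully decay.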
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>