    [Flux LoRA] fix issues in flux lora scripts (#11111) · 71f34fc5
    Linoy Tsaban authored
    
    
    * remove custom scheduler
    
    * update requirements.txt
    
    * log_validation with mixed precision
    
    * add intermediate embeddings saving when checkpointing is enabled
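The embedding-saving bullet above can be sketched as a small helper; the function name, the `checkpoint-N` directory layout, and the pickle-based serialization are illustrative assumptions (a real training script would save tensors with `torch.save`):

```python
import os
import pickle

def save_prompt_embeddings(output_dir, global_step, prompt_embeds):
    """Persist precomputed prompt embeddings next to a training checkpoint.

    Illustrative sketch: writing the embeddings out at each checkpoint lets
    a resumed run skip re-encoding the prompts with the text encoders.
    """
    ckpt_dir = os.path.join(output_dir, f"checkpoint-{global_step}")
    os.makedirs(ckpt_dir, exist_ok=True)
    path = os.path.join(ckpt_dir, "prompt_embeds.pkl")
    with open(path, "wb") as f:
        pickle.dump(prompt_embeds, f)  # stand-in for torch.save in the sketch
    return path
```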
    
    * remove comment
    
    * fix validation
    
    * add unwrap_model for accelerator, torch.no_grad context for validation, fix accelerator.accumulate call in advanced script
    
    * revert unwrap_model change temp
    
    * add .module to address distributed training bug + replace accelerator.unwrap_model with unwrap_model
    
    * changes to align advanced script with canonical script
    
    * make changes for distributed training + unify unwrap_model calls in advanced script
    
    * add module.dtype fix to dreambooth script
    
    * unify unwrap_model calls in dreambooth script
    
    * fix condition in validation run
    
    * mixed precision
    
    * Update examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    
    * smol style change
    
    * change autocast
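The autocast change can be sketched as a small context helper; the helper name and the no-op fallback are assumptions for illustration, not the scripts' actual API:

```python
from contextlib import nullcontext

def validation_autocast(device_type, mixed_precision_enabled):
    """Return an autocast context for validation inference, or a no-op.

    Sketch of the mixed-precision validation pattern: when mixed precision
    is enabled, run the pipeline under torch.autocast so activations use
    the reduced dtype; otherwise return a context manager that does nothing.
    """
    if not mixed_precision_enabled:
        return nullcontext()
    import torch  # imported lazily so the no-op path runs without torch
    return torch.autocast(device_type)
```

Validation would then run inside the context, e.g. `with validation_autocast(accelerator.device.type, args.mixed_precision != "no"): ...` (hypothetical argument names).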
    
    * Apply style fixes
    
    ---------
    Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
    Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>