1. 03 Feb, 2024 1 commit
    • [advanced dreambooth lora sdxl script] new features + bug fixes (#6691) · 65329aed
      Linoy Tsaban authored
      
      
      * add noise_offset param
      
      * micro conditioning - wip
      
      * image processing adjusted and moved to support micro conditioning
      
      * change time ids to be computed inside train loop
      
      * time ids shape fix
      
      * move token replacement of validation prompt to the same section of instance prompt and class prompt
      
      * add offset noise to sd15 advanced script
      
      * fix token loading during validation
      
      * fix token loading during validation in sdxl script
      
      * a little clean
      
      * style
      
      * sdxl script - a little clean + minor path fix
      
      sd 1.5 script - change default resolution value
      
* sd 1.5 script - minor path fix
      
      * fix missing comma in code example in model card
      
      * clean up commented lines
      
      * style
      
      * remove time ids computed outside training loop - no longer used now that we utilize micro-conditioning, as all time ids are now computed inside the training loop
      
      * style
      
      * [WIP] - added draft readme, building off of examples/dreambooth/README.md
      
      * readme
      
      * removed --crops_coords_top_left from CLI args
      
      * style
      
      * fix missing shape bug due to missing RGB if statement
      
* add blog mention at the start of the readme as well
      
      * Update examples/advanced_diffusion_training/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * change note to render nicely as well
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      65329aed
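The noise-offset and micro-conditioning changes listed above can be sketched without any diffusers dependencies. This is a minimal NumPy illustration (function names are ours, not the script's actual helpers), assuming the usual SDXL convention of concatenating original size, crop coordinates, and target size into one time-id vector computed per step inside the training loop:

```python
import numpy as np

def sample_offset_noise(shape, noise_offset=0.05, rng=None):
    """Gaussian noise plus an optional per-sample, per-channel constant
    offset, broadcast over the spatial dimensions (the noise_offset trick)."""
    rng = np.random.default_rng(rng)
    noise = rng.standard_normal(shape)  # shape: (batch, channels, H, W)
    if noise_offset:
        noise = noise + noise_offset * rng.standard_normal(shape[:2] + (1, 1))
    return noise

def compute_time_ids(original_size, crops_coords_top_left, target_size):
    """SDXL micro-conditioning: (orig_h, orig_w) + (crop_top, crop_left)
    + (target_h, target_w) concatenated into a single id vector."""
    return list(original_size) + list(crops_coords_top_left) + list(target_size)
```

Computing the time ids per image inside the loop (rather than once outside it) is what lets each sample carry its own crop coordinates.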
  2. 07 Dec, 2023 1 commit
    • [`PEFT`] Adapt example scripts to use PEFT (#5388) · c2717317
      Younes Belkada authored
      
      
      * adapt example scripts to use PEFT
      
      * Update examples/text_to_image/train_text_to_image_lora.py
      
      * fix
      
      * add for SDXL
      
      * oops
      
      * make sure to install peft
      
      * fix
      
      * fix
      
      * fix dreambooth and lora
      
      * more fixes
      
      * add peft to requirements.txt
      
      * fix
      
      * final fix
      
      * add peft version in requirements
      
      * remove comment
      
      * change variable names
      
      * add few lines in readme
      
      * add to reqs
      
      * style
      
      * fix issues
      
      * fix lora dreambooth xl tests
      
      * init_lora_weights to gaussian and add out proj where missing
      
* amend requirements.

* amend requirements.txt
      
      * add correct peft versions
      
      ---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      c2717317
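This migration swaps the examples' hand-rolled LoRA layers for PEFT-managed adapters, including the `init_lora_weights` change to `"gaussian"` mentioned above. As a dependency-free toy sketch (class and parameter names are ours), here is what a LoRA linear with Gaussian-initialized `A` and zero-initialized `B` computes:

```python
import numpy as np

class LoRALinear:
    """Toy LoRA layer: y = x @ W + (alpha / r) * x @ A @ B.

    Mirrors init_lora_weights="gaussian": A starts as small Gaussian
    noise and B starts at zero, so the adapter is an exact no-op
    before any training updates B.
    """
    def __init__(self, weight, r=4, alpha=4, rng=None):
        rng = np.random.default_rng(rng)
        d_in, d_out = weight.shape
        self.weight = weight
        self.scale = alpha / r
        self.A = 0.02 * rng.standard_normal((d_in, r))  # gaussian init
        self.B = np.zeros((r, d_out))                   # zero init

    def __call__(self, x):
        return x @ self.weight + self.scale * (x @ self.A @ self.B)
```

Starting `B` at zero is the standard LoRA design choice: training begins from the frozen base model's behavior, and only the low-rank update is learned.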
  3. 11 Apr, 2023 1 commit
  4. 20 Jan, 2023 1 commit
  5. 09 Dec, 2022 1 commit
    • Update requirements.txt (#1623) · f1b726e4
      Haofan Wang authored
      * Update requirements.txt
      
      * Update requirements_flax.txt
      
      f1b726e4
  6. 06 Dec, 2022 1 commit
  7. 20 Oct, 2022 1 commit
  8. 29 Sep, 2022 1 commit
  9. 27 Sep, 2022 1 commit
  10. 02 Sep, 2022 1 commit
    • Textual inversion (#266) · d0d3e24e
      Suraj Patil authored
      * add textual inversion script
      
      * make the loop work
      
      * make coarse_loss optional
      
      * save pipeline after training
      
      * add arg pretrained_model_name_or_path
      
      * fix saving
      
      * fix gradient_accumulation_steps
      
      * style
      
      * fix progress bar steps
      
      * scale lr
      
      * add argument to accept style
      
      * remove unused args
      
      * scale lr using num gpus
      
      * load tokenizer using args
      
      * add checks when converting init token to id
      
* improve comments and style
      
      * document args
      
      * more cleanup
      
* fix default adamw args
      
      * TextualInversionWrapper -> CLIPTextualInversionWrapper
      
      * fix tokenizer loading
      
      * Use the CLIPTextModel instead of wrapper
      
      * clean dataset
      
      * remove commented code
      
      * fix accessing grads for multi-gpu
      
      * more cleanup
      
      * fix saving on multi-GPU
      
      * init_placeholder_token_embeds
      
      * add seed
      
      * fix flip
      
      * fix multi-gpu
      
      * add utility methods in wrapper
      
      * remove ipynb
      
      * don't use wrapper
      
* don't pass vae and unet to accelerate prepare
      
      * bring back accelerator.accumulate
      
      * scale latents
      
      * use only one progress bar for steps
      
      * push_to_hub at the end of training
      
      * remove unused args
      
      * log some important stats
      
      * store args in tensorboard
      
      * pretty comments
      
      * save the trained embeddings
      
* move the script up
      
      * add requirements file
      
      * more cleanup
      
* fix typo
      
      * begin readme
      
      * style -> learnable_property
      
      * keep vae and unet in eval mode
      
      * address review comments
      
      * address more comments
      
      * removed unused args
      
      * add train command in readme
      
      * update readme
      d0d3e24e
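Two steps from the commit list above (`init_placeholder_token_embeds` and scaling the learning rate by the number of GPUs) can be sketched in plain NumPy. The helper names here are illustrative, not the script's actual API:

```python
import numpy as np

def add_placeholder_token(embeddings, initializer_id):
    """Append a new placeholder-token row initialized by copying an
    existing token's embedding; during training only this new row
    should receive gradient updates."""
    new_row = embeddings[initializer_id].copy()
    return np.vstack([embeddings, new_row[None, :]])

def scaled_lr(base_lr, grad_accum_steps, batch_size, num_processes):
    """Linear LR scaling across gradient accumulation, per-device
    batch size, and the number of GPUs/processes."""
    return base_lr * grad_accum_steps * batch_size * num_processes
```

Initializing the placeholder from a semantically close token (e.g. "toy" for a toy object) gives the optimization a sensible starting point instead of a random embedding.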
  11. 30 Aug, 2022 1 commit