1. 06 Oct, 2022 4 commits
  2. 05 Oct, 2022 4 commits
  3. 04 Oct, 2022 2 commits
  4. 03 Oct, 2022 2 commits
  5. 29 Sep, 2022 1 commit
  6. 28 Sep, 2022 2 commits
  7. 27 Sep, 2022 6 commits
    • [CLIPGuidedStableDiffusion] remove set_format from pipeline (#653) · c0c98df9
      Suraj Patil authored
      remove set_format from pipeline
    • [dreambooth] update install section (#650) · e5eed523
      Suraj Patil authored
      update install section
    • [examples/dreambooth] don't pass tensor_format to scheduler. (#649) · ac665b64
      Suraj Patil authored
      don't pass tensor_format
    • [Pytorch] Pytorch only schedulers (#534) · bd8df2da
      Kashif Rasul authored
      
      
      * pytorch only schedulers
      
      * fix style
      
      * remove match_shape
      
      * pytorch only ddpm
      
      * remove SchedulerMixin
      
      * remove numpy from karras_ve
      
      * fix types
      
      * remove numpy from lms_discrete
      
      * remove numpy from pndm
      
      * fix typo
      
      * remove mixin and numpy from sde_vp and ve
      
      * remove remaining tensor_format
      
      * fix style
      
      * sigmas has to be torch tensor
      
      * removed set_format in readme
      
      * remove set format from docs
      
      * remove set_format from pipelines
      
      * update tests
      
      * fix typo
      
      * continue to use mixin
      
      * fix imports
      
      * removed unused imports
      
      * match shape instead of assuming image shapes
      
      * remove import typo
      
      * update call to add_noise
      
      * use math instead of numpy
      
      * fix t_index
      
      * removed commented out numpy tests
      
      * timesteps needs to be discrete
      
      * cast timesteps to int in flax scheduler too
      
      * fix device mismatch issue
      
      * small fix
      
      * Update src/diffusers/schedulers/scheduling_pndm.py
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
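Several of the commits above ("use math instead of numpy", "sigmas has to be torch tensor", "update call to add_noise") replace NumPy calls with native PyTorch ops or the standard-library `math` module. As a rough scalar sketch of what a scheduler's `add_noise` step computes (hypothetical names, not the actual diffusers signature):

```python
import math

def add_noise(sample, noise, alpha_cumprod_t):
    # DDPM forward process for one scalar element:
    #   x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps
    # where alpha_cumprod_t is the cumulative product of alphas at timestep t.
    return math.sqrt(alpha_cumprod_t) * sample + math.sqrt(1.0 - alpha_cumprod_t) * noise
```

The real schedulers apply the same formula elementwise over torch tensors; the point of the PR is that no NumPy round-trip is needed for it.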
    • Add training example for DreamBooth. (#554) · 3b747de8
      Zhenhuan Liu authored
      
      
      * Add training example for DreamBooth.
      
      * Fix bugs.
      
      * Update readme and default hyperparameters.
      
      * Reformatting code with black.
      
      * Update for multi-gpu training.
      
      * Apply suggestions from code review
      
      * improve sampling
      
      * fix autocast
      
      * improve sampling more
      
      * fix saving
      
      * actually fix saving
      
      * fix saving
      
      * improve dataset
      
      * fix collate fn
      
      * fix collate_fn
      
      * fix collate fn
      
      * fix key name
      
      * fix dataset
      
      * fix collate fn
      
      * concat batch in collate fn
      
      * add grad ckpt
      
      * add option for 8bit adam
      
      * do two forward passes for prior preservation
      
      * Revert "do two forward passes for prior preservation"
      
      This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.
      
      * add option for prior_loss_weight
      
      * add option for clip grad norm
      
      * add more comments
      
      * update readme
      
      * update readme
      
      * Apply suggestions from code review
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstr for dataset
      
      * update the saving logic
      
      * Update examples/dreambooth/README.md
      
      * remove unused imports
      Co-authored-by: Suraj Patil <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
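The commit history above sketches how the DreamBooth loss ended up: the collate_fn concatenates instance and class examples into one batch ("concat batch in collate fn"), the two-forward-pass variant was reverted, and the prior term got a weight option ("add option for prior_loss_weight"). A simplified pure-Python sketch of that objective (names and the list-based MSE are hypothetical; the real script works on latent tensors):

```python
def prior_preservation_loss(pred, target, n_instance, prior_loss_weight=1.0):
    # DreamBooth prior-preservation objective over a concatenated batch:
    # the first n_instance entries are instance examples, the rest are
    # class ("prior") examples; one forward pass, two loss terms.
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

    instance_loss = mse(pred[:n_instance], target[:n_instance])
    prior_loss = mse(pred[n_instance:], target[n_instance:])
    return instance_loss + prior_loss_weight * prior_loss
```

Concatenating rather than running two forward passes keeps a single batch through the UNet, which is why the revert above landed.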
    • Fix docs link to train_unconditional.py (#642) · bb0c5d15
      Abdullah Alfaraj authored
      The link pointed to an old location of the train_unconditional.py file.
  8. 21 Sep, 2022 1 commit
  9. 19 Sep, 2022 2 commits
  10. 16 Sep, 2022 1 commit
  11. 15 Sep, 2022 1 commit
    • Karras VE, DDIM and DDPM flax schedulers (#508) · b34be039
      Kashif Rasul authored
      * beta never changes, so removed from state
      
      * fix typos in docs
      
      * removed unused var
      
      * initial ddim flax scheduler
      
      * import
      
      * added dummy objects
      
      * fix style
      
      * fix typo
      
      * docs
      
      * fix typo in comment
      
      * set return type
      
      * added flax ddpm
      
      * fix style
      
      * remake
      
      * pass PRNG key as argument and split before use
      
      * fix doc string
      
      * use config
      
      * added flax Karras VE scheduler
      
      * make style
      
      * fix dummy
      
      * fix ndarray type annotation
      
      * replace returns a new state
      
      * added lms_discrete scheduler
      
      * use self.config
      
      * add_noise needs state
      
      * use config
      
      * use config
      
      * docstring
      
      * added flax score sde ve
      
      * fix imports
      
      * fix typos
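The flax schedulers in this PR are functional rather than stateful: values that change per step live in an immutable state object ("beta never changes, so removed from state", "replace returns a new state", "add_noise needs state"), and PRNG keys are passed in explicitly and split before use. A minimal pure-Python sketch of that state-threading pattern (the names and the toy update rule are hypothetical; the real schedulers derive the update from their alpha/beta schedules in the config):

```python
from typing import NamedTuple, Tuple

class SchedulerState(NamedTuple):
    # Immutable state threaded through every scheduler call.
    timestep: int

def step(state: SchedulerState, model_output: float, sample: float) -> Tuple[float, SchedulerState]:
    # Stand-in denoising update for illustration only.
    prev_sample = sample - 0.1 * model_output
    # _replace builds a *new* state instead of mutating in place,
    # matching the "replace returns a new state" commits above.
    return prev_sample, state._replace(timestep=state.timestep - 1)
```

Because nothing is mutated, the same pattern composes with `jax.jit` and lets the caller control randomness by splitting keys before each call.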
  12. 08 Sep, 2022 1 commit
  13. 07 Sep, 2022 2 commits
  14. 06 Sep, 2022 1 commit
  15. 05 Sep, 2022 2 commits
  16. 02 Sep, 2022 2 commits
    • Update README.md · 30e7c78a
      Suraj Patil authored
    • Textual inversion (#266) · d0d3e24e
      Suraj Patil authored
      * add textual inversion script
      
      * make the loop work
      
      * make coarse_loss optional
      
      * save pipeline after training
      
      * add arg pretrained_model_name_or_path
      
      * fix saving
      
      * fix gradient_accumulation_steps
      
      * style
      
      * fix progress bar steps
      
      * scale lr
      
      * add argument to accept style
      
      * remove unused args
      
      * scale lr using num gpus
      
      * load tokenizer using args
      
      * add checks when converting init token to id
      
      * improve comments and style
      
      * document args
      
      * more cleanup
      
      * fix default adamw args
      
      * TextualInversionWrapper -> CLIPTextualInversionWrapper
      
      * fix tokenizer loading
      
      * Use the CLIPTextModel instead of wrapper
      
      * clean dataset
      
      * remove commented code
      
      * fix accessing grads for multi-gpu
      
      * more cleanup
      
      * fix saving on multi-GPU
      
      * init_placeholder_token_embeds
      
      * add seed
      
      * fix flip
      
      * fix multi-gpu
      
      * add utility methods in wrapper
      
      * remove ipynb
      
      * don't use wrapper
      
      * don't pass vae and unet to accelerate prepare
      
      * bring back accelerator.accumulate
      
      * scale latents
      
      * use only one progress bar for steps
      
      * push_to_hub at the end of training
      
      * remove unused args
      
      * log some important stats
      
      * store args in tensorboard
      
      * pretty comments
      
      * save the trained embeddings
      
      * move the script up
      
      * add requirements file
      
      * more cleanup
      
      * fix typo
      
      * begin readme
      
      * style -> learnable_property
      
      * keep vae and unet in eval mode
      
      * address review comments
      
      * address more comments
      
      * removed unused args
      
      * add train command in readme
      
      * update readme
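The textual inversion script registers a new placeholder token and seeds its embedding from an existing token ("add checks when converting init token to id", "init_placeholder_token_embeds"). A simplified dictionary-based sketch of that initialization (all names, including the placeholder, are hypothetical; the real script uses the CLIP tokenizer and text encoder embedding matrix):

```python
def init_placeholder_embedding(vocab, embeddings, placeholder, init_token):
    # Register `placeholder` as a new vocab entry and copy the embedding
    # of `init_token` as its starting point, with the sanity checks the
    # commits above mention.
    if placeholder in vocab:
        raise ValueError(f"token {placeholder!r} already exists in the vocab")
    init_id = vocab.get(init_token)
    if init_id is None:
        raise ValueError(f"initializer token {init_token!r} must be a single existing token")
    new_id = len(vocab)
    vocab[placeholder] = new_id
    embeddings.append(list(embeddings[init_id]))  # copy, don't alias
    return new_id
```

Only this one new embedding row is trained; the rest of the text encoder stays frozen, which is what makes the learned vector portable.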
  17. 01 Sep, 2022 1 commit
  18. 30 Aug, 2022 1 commit
  19. 29 Aug, 2022 2 commits
  20. 27 Aug, 2022 1 commit
  21. 26 Aug, 2022 1 commit