1. 18 Nov, 2022 1 commit
  2. 31 Oct, 2022 1 commit
  3. 26 Oct, 2022 1 commit
  4. 12 Oct, 2022 2 commits
  5. 11 Oct, 2022 1 commit
    • stable diffusion fine-tuning (#356) · 66a5279a
      Suraj Patil authored
      
      
      * begin text2image script
      
      * loading the datasets, preprocessing & transforms
      
      * handle input features correctly
      
      * add gradient checkpointing support
      
      * fix output names
      
      * run unet in train mode not text encoder
      
      * use no_grad instead of freezing params
      
      * default max steps None
      
      * pad to longest
      
      * don't pad when tokenizing
      
      * fix encode on multi gpu
      
      * fix stupid bug
      
      * add random flip
      
      * add ema
      
      * fix ema
      
      * put ema on cpu
      
      * improve EMA model
      
      * contiguous_format
      
       * don't wrap vae and text encoder in accelerate
      
      * remove no_grad
      
      * use randn_like
      
      * fix resize
      
      * improve few things
      
      * log epoch loss
      
      * set log level
      
      * don't log each step
      
      * remove max_length from collate
      
      * style
      
      * add report_to option
      
      * make scale_lr false by default
      
      * add grad clipping
      
      * add an option to use 8bit adam
      
      * fix logging in multi-gpu, log every step
      
      * more comments
      
      * remove eval for now
      
       * address review comments
      
      * add requirements file
      
      * begin readme
      
      * begin readme
      
      * fix typo
      
      * fix push to hub
      
      * populate readme
      
      * update readme
      
      * remove use_auth_token from the script
      
      * address some review comments
      
      * better mixed precision support
      
      * remove redundant to
      
      * create ema model early
      
      * Apply suggestions from code review
       Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * better description for train_data_dir
      
      * add diffusers in requirements
      
      * update dataset_name_mapping
      
      * update readme
      
      * add inference example
       Co-authored-by: anton-l <anton@huggingface.co>
       Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      66a5279a
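Several bullets above ("add ema", "put ema on cpu", "improve EMA model") concern keeping an exponential moving average of the UNet weights during fine-tuning. A minimal sketch of the idea, using plain Python floats rather than the actual diffusers `EMAModel`; the class and method names here are illustrative, not the script's API:

```python
class EMA:
    """Toy exponential moving average over a list of scalar parameters.

    The real training script tracks the UNet's tensors instead, and one of
    the commits above moves the shadow copy to CPU to save GPU memory.
    """

    def __init__(self, params, decay=0.9999):
        self.decay = decay
        self.shadow = list(params)  # shadow copy of the parameters

    def step(self, params):
        # shadow <- decay * shadow + (1 - decay) * current parameter
        self.shadow = [
            self.decay * s + (1.0 - self.decay) * p
            for s, p in zip(self.shadow, params)
        ]
        return self.shadow
```

At evaluation or save time, the shadow values replace the live weights, which smooths out the noise of individual optimizer steps.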
  6. 10 Oct, 2022 1 commit
  7. 07 Oct, 2022 1 commit
  8. 05 Oct, 2022 3 commits
  9. 04 Oct, 2022 1 commit
  10. 03 Oct, 2022 1 commit
  11. 27 Sep, 2022 2 commits
    • [examples/dreambooth] don't pass tensor_format to scheduler. (#649) · ac665b64
      Suraj Patil authored
      don't pass tensor_format
      ac665b64
    • Add training example for DreamBooth. (#554) · 3b747de8
      Zhenhuan Liu authored
      
      
      * Add training example for DreamBooth.
      
      * Fix bugs.
      
      * Update readme and default hyperparameters.
      
      * Reformatting code with black.
      
       * Update for multi-gpu training.
      
      * Apply suggestions from code review
      
       * improve sampling
      
      * fix autocast
      
      * improve sampling more
      
      * fix saving
      
       * actually fix saving
      
      * fix saving
      
      * improve dataset
      
       * fix collate fn
      
      * fix collate_fn
      
      * fix collate fn
      
      * fix key name
      
      * fix dataset
      
      * fix collate fn
      
      * concat batch in collate fn
      
      * add grad ckpt
      
      * add option for 8bit adam
      
      * do two forward passes for prior preservation
      
      * Revert "do two forward passes for prior preservation"
      
      This reverts commit 661ca4677e6dccc4ad596c2ee6ca4baad4159e95.
      
      * add option for prior_loss_weight
      
      * add option for clip grad norm
      
      * add more comments
      
      * update readme
      
      * update readme
      
      * Apply suggestions from code review
       Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      
      * add docstr for dataset
      
      * update the saving logic
      
      * Update examples/dreambooth/README.md
      
      * remove unused imports
       Co-authored-by: Suraj Patil <surajp815@gmail.com>
       Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
      3b747de8
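The prior-preservation bullets above ("concat batch in collate fn", the reverted two-forward-pass approach, "add option for prior_loss_weight") combine an instance loss with a weighted class-prior loss computed from a single forward pass over the concatenated batch. A schematic version with plain Python lists standing in for tensors; `mse` and `dreambooth_loss` are illustrative names, not the script's API:

```python
def mse(pred, target):
    # Mean squared error over two equal-length lists of floats.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def dreambooth_loss(model, instance_batch, class_batch, prior_loss_weight=1.0):
    # Concatenate instance and class-prior examples so the model runs
    # one forward pass instead of two (the reverted approach above).
    batch = instance_batch + class_batch
    pred = model(batch)
    n = len(instance_batch)
    # Split the prediction back into its two halves.
    instance_pred, prior_pred = pred[:n], pred[n:]
    instance_loss = mse(instance_pred, instance_batch)
    prior_loss = mse(prior_pred, class_batch)
    # prior_loss_weight balances fidelity to the new subject against
    # preserving the model's prior over the class.
    return instance_loss + prior_loss_weight * prior_loss
```

Here the "model" is any callable on the concatenated batch; in the real script it is the UNet predicting noise residuals for latents.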
  12. 16 Sep, 2022 1 commit
  13. 15 Sep, 2022 1 commit
    • Karras VE, DDIM and DDPM flax schedulers (#508) · b34be039
      Kashif Rasul authored
      * beta never changes, so removed from state
      
      * fix typos in docs
      
      * removed unused var
      
      * initial ddim flax scheduler
      
      * import
      
      * added dummy objects
      
      * fix style
      
      * fix typo
      
      * docs
      
      * fix typo in comment
      
      * set return type
      
       * added flax ddpm
      
      * fix style
      
      * remake
      
      * pass PRNG key as argument and split before use
      
      * fix doc string
      
      * use config
      
      * added flax Karras VE scheduler
      
      * make style
      
      * fix dummy
      
      * fix ndarray type annotation
      
      * replace returns a new state
      
      * added lms_discrete scheduler
      
      * use self.config
      
      * add_noise needs state
      
      * use config
      
      * use config
      
      * docstring
      
      * added flax score sde ve
      
      * fix imports
      
      * fix typos
      b34be039
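Bullets like "replace returns a new state", "add_noise needs state", and "pass PRNG key as argument and split before use" reflect the functional style of the Flax schedulers: all mutable values live in an immutable state object, and every call returns an updated copy instead of mutating in place. A minimal sketch of that pattern with a frozen dataclass; the field and function names are illustrative, not the actual scheduler API:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SchedulerState:
    # In the real Flax schedulers this also holds timesteps, alphas, etc.
    step_index: int = 0
    num_inference_steps: int = 0

def set_timesteps(state, num_inference_steps):
    # `replace` builds a NEW state; the old one is left untouched,
    # which is what makes the scheduler safe under jax.jit.
    return replace(state, step_index=0, num_inference_steps=num_inference_steps)

def scheduler_step(state):
    # Each denoising step likewise returns a fresh state.
    return replace(state, step_index=state.step_index + 1)
```

Randomness follows the same discipline: the caller passes a PRNG key, and the scheduler splits it before every use rather than keeping any hidden random state.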
  14. 08 Sep, 2022 1 commit
  15. 07 Sep, 2022 1 commit
  16. 05 Sep, 2022 2 commits
  17. 02 Sep, 2022 1 commit
    • Textual inversion (#266) · d0d3e24e
      Suraj Patil authored
      * add textual inversion script
      
      * make the loop work
      
      * make coarse_loss optional
      
      * save pipeline after training
      
      * add arg pretrained_model_name_or_path
      
      * fix saving
      
      * fix gradient_accumulation_steps
      
      * style
      
      * fix progress bar steps
      
      * scale lr
      
      * add argument to accept style
      
      * remove unused args
      
      * scale lr using num gpus
      
      * load tokenizer using args
      
      * add checks when converting init token to id
      
       * improve comments and style
      
      * document args
      
      * more cleanup
      
       * fix default adamw args
      
      * TextualInversionWrapper -> CLIPTextualInversionWrapper
      
      * fix tokenizer loading
      
      * Use the CLIPTextModel instead of wrapper
      
      * clean dataset
      
      * remove commented code
      
      * fix accessing grads for multi-gpu
      
      * more cleanup
      
      * fix saving on multi-GPU
      
      * init_placeholder_token_embeds
      
      * add seed
      
      * fix flip
      
      * fix multi-gpu
      
      * add utility methods in wrapper
      
      * remove ipynb
      
      * don't use wrapper
      
       * don't pass vae and unet to accelerate prepare
      
      * bring back accelerator.accumulate
      
      * scale latents
      
      * use only one progress bar for steps
      
      * push_to_hub at the end of training
      
      * remove unused args
      
      * log some important stats
      
      * store args in tensorboard
      
      * pretty comments
      
      * save the trained embeddings
      
       * move the script up
      
      * add requirements file
      
      * more cleanup
      
       * fix typo
      
      * begin readme
      
      * style -> learnable_property
      
      * keep vae and unet in eval mode
      
      * address review comments
      
      * address more comments
      
      * removed unused args
      
      * add train command in readme
      
      * update readme
      d0d3e24e
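The "add checks when converting init token to id" and "init_placeholder_token_embeds" bullets cover the core setup step of textual inversion: register a new placeholder token and seed its embedding from an existing token so training starts from a meaningful point. A toy sketch with a dict vocabulary and a list-of-lists embedding table instead of a real tokenizer and `CLIPTextModel`; the function name is illustrative, not the script's:

```python
def init_placeholder_token(vocab, embeddings, placeholder_token, init_token):
    """Add `placeholder_token` to the vocabulary and initialize its
    embedding as a copy of `init_token`'s embedding."""
    # The checks mirror the "add checks when converting init token to id" step.
    if placeholder_token in vocab:
        raise ValueError("placeholder token already in vocabulary")
    if init_token not in vocab:
        raise ValueError("init token not in vocabulary")
    new_id = len(embeddings)
    vocab[placeholder_token] = new_id
    # Copy (not alias) the init token's vector; only this row is trained.
    embeddings.append(list(embeddings[vocab[init_token]]))
    return new_id
```

In the real script the analogous steps are adding the token to the tokenizer, resizing the text encoder's embedding matrix, and copying the init token's row, with only that new row left trainable.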