1. 15 Mar, 2023 1 commit
  2. 25 Jan, 2023 1 commit
  3. 23 Jan, 2023 1 commit
  4. 20 Jan, 2023 1 commit
  5. 09 Dec, 2022 1 commit
    • Update requirements.txt (#1623) · f1b726e4
      Haofan Wang authored
      * Update requirements.txt
      
      * Update requirements_flax.txt
      
      * Update requirements.txt
      
      * Update requirements_flax.txt
      
      * Update requirements.txt
      
      * Update requirements_flax.txt
  6. 06 Dec, 2022 1 commit
  7. 11 Oct, 2022 1 commit
    • stable diffusion fine-tuning (#356) · 66a5279a
      Suraj Patil authored
      * begin text2image script
      
      * loading the datasets, preprocessing & transforms
      
      * handle input features correctly
      
      * add gradient checkpointing support
      
      * fix output names
      
      * run unet in train mode not text encoder
      
      * use no_grad instead of freezing params
      
      * default max steps None
      
      * pad to longest
      
      * don't pad when tokenizing
      
      * fix encode on multi gpu
      
      * fix stupid bug
      
      * add random flip
      
      * add ema
      
      * fix ema
      
      * put ema on cpu
      
      * improve EMA model (see the EMA sketch after this commit)
      
      * contiguous_format
      
      * don't wrap vae and text encoder in accelerate
      
      * remove no_grad
      
      * use randn_like
      
      * fix resize
      
      * improve few things
      
      * log epoch loss
      
      * set log level
      
      * don't log each step
      
      * remove max_length from collate
      
      * style
      
      * add report_to option
      
      * make scale_lr false by default
      
      * add grad clipping
      
      * add an option to use 8bit adam
      
      * fix logging in multi-gpu, log every step
      
      * more comments
      
      * remove eval for now
      
      * address review comments
      
      * add requirements file
      
      * begin readme
      
      * begin readme
      
      * fix typo
      
      * fix push to hub
      
      * populate readme
      
      * update readme
      
      * remove use_auth_token from the script
      
      * address some review comments
      
      * better mixed precision support
      
      * remove redundant to
      
      * create ema model early
      
      * Apply suggestions from code review
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * better description for train_data_dir
      
      * add diffusers in requirements
      
      * update dataset_name_mapping
      
      * update readme
      
      * add inference example
      Co-authored-by: anton-l <anton@huggingface.co>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
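
      The EMA-related commits above ("add ema", "put ema on cpu", "improve EMA model") amount to keeping a slowly updated shadow copy of the UNet weights. Below is a minimal sketch of that idea in plain PyTorch; the class name SimpleEMA, the 0.9999 decay, and keeping the shadow on the CPU are illustrative assumptions, not the script's actual EMA implementation.

      ```python
      import copy

      import torch


      class SimpleEMA:
          """Exponential moving average of a model's parameters (illustrative sketch)."""

          def __init__(self, model: torch.nn.Module, decay: float = 0.9999, device: str = "cpu"):
              self.decay = decay
              # Keep the shadow copy on the CPU so it does not take up GPU memory,
              # echoing the "put ema on cpu" commit above.
              self.shadow = copy.deepcopy(model).to(device)
              self.shadow.requires_grad_(False)

          @torch.no_grad()
          def step(self, model: torch.nn.Module):
              # shadow <- decay * shadow + (1 - decay) * current parameters
              for ema_p, p in zip(self.shadow.parameters(), model.parameters()):
                  ema_p.mul_(self.decay).add_(p.detach().to(ema_p.device), alpha=1.0 - self.decay)


      # Hypothetical usage: after every optimizer.step(), call ema.step(unet);
      # copy ema.shadow's weights into the pipeline before saving or evaluating.
      ```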
  8. 29 Sep, 2022 1 commit
  9. 27 Sep, 2022 1 commit
  10. 02 Sep, 2022 1 commit
    • Textual inversion (#266) · d0d3e24e
      Suraj Patil authored
      * add textual inversion script
      
      * make the loop work
      
      * make coarse_loss optional
      
      * save pipeline after training
      
      * add arg pretrained_model_name_or_path
      
      * fix saving
      
      * fix gradient_accumulation_steps
      
      * style
      
      * fix progress bar steps
      
      * scale lr
      
      * add argument to accept style
      
      * remove unused args
      
      * scale lr using num gpus
      
      * load tokenizer using args
      
      * add checks when converting init token to id
      
      * improve comments and style
      
      * document args
      
      * more cleanup
      
      * fix default adamw args
      
      * TextualInversionWrapper -> CLIPTextualInversionWrapper
      
      * fix tokenizer loading
      
      * Use the CLIPTextModel instead of wrapper
      
      * clean dataset
      
      * remove commented code
      
      * fix accessing grads for multi-gpu
      
      * more cleanup
      
      * fix saving on multi-GPU
      
      * init_placeholder_token_embeds (see the token-embedding sketch after this commit)
      
      * add seed
      
      * fix flip
      
      * fix multi-gpu
      
      * add utility methods in wrapper
      
      * remove ipynb
      
      * don't use wrapper
      
      * don't pass vae and unet to accelerate prepare
      
      * bring back accelerator.accumulate
      
      * scale latents
      
      * use only one progress bar for steps
      
      * push_to_hub at the end of training
      
      * remove unused args
      
      * log some important stats
      
      * store args in tensorboard
      
      * pretty comments
      
      * save the trained embeddings
      
      * move the script up
      
      * add requirements file
      
      * more cleanup
      
      * fix typo
      
      * begin readme
      
      * style -> learnable_property
      
      * keep vae and unet in eval mode
      
      * address review comments
      
      * address more comments
      
      * remove unused args
      
      * add train command in readme
      
      * update readme
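
      The "add checks when converting init token to id" and "init_placeholder_token_embeds" commits above cover the core trick of textual inversion: register a new placeholder token with the CLIP tokenizer and seed its embedding from an existing word. Here is a hedged sketch of that step; the model id, the placeholder string, and the initializer word are illustrative assumptions, not values taken from the script.

      ```python
      import torch
      from transformers import CLIPTextModel, CLIPTokenizer

      tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
      text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

      placeholder_token = "<my-concept>"  # assumed example placeholder
      initializer_token = "toy"           # assumed example initializer word

      # Register the placeholder token and grow the embedding matrix to make room for it.
      if tokenizer.add_tokens(placeholder_token) == 0:
          raise ValueError("Placeholder token already exists in the tokenizer.")
      text_encoder.resize_token_embeddings(len(tokenizer))

      # The initializer word must map to exactly one token id (the check the commit mentions).
      init_ids = tokenizer.encode(initializer_token, add_special_tokens=False)
      if len(init_ids) != 1:
          raise ValueError("Initializer token must map to exactly one token.")

      placeholder_id = tokenizer.convert_tokens_to_ids(placeholder_token)
      with torch.no_grad():
          embeds = text_encoder.get_input_embeddings().weight
          embeds[placeholder_id] = embeds[init_ids[0]].clone()
      ```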
  11. 30 Aug, 2022 1 commit