- 07 Jan, 2025 1 commit
Rahul Raman authored
* make base code changes adapted from the train_instructpix2pix script in examples
* change code to use PEFT as discussed in issue 10062
* update README training command
* update README training command
* refactor variable name and freezing of the unet
* Update examples/research_projects/instructpix2pix_lora/train_instruct_pix2pix_lora.py
  Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* update README installation instructions
* cleanup code using make style and quality

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
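The entry above describes switching the LoRA variant of the InstructPix2Pix training script to PEFT, with the UNet frozen so that only the adapter weights are trained. A minimal sketch of that pattern, assuming the PEFT `LoraConfig`/`add_adapter` route used by other diffusers LoRA examples; the rank, target modules, and checkpoint name are illustrative assumptions, not the script's exact values:

```python
# Minimal sketch of the PEFT-based LoRA setup described above; values are illustrative.
import torch
from diffusers import UNet2DConditionModel
from peft import LoraConfig

# Load the InstructPix2Pix UNet and freeze every base weight.
unet = UNet2DConditionModel.from_pretrained(
    "timbrooks/instruct-pix2pix", subfolder="unet"
)
unet.requires_grad_(False)

# Attach LoRA adapters to the attention projections; rank and targets are assumptions.
unet_lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
unet.add_adapter(unet_lora_config)

# Only the injected LoRA parameters remain trainable.
lora_params = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(lora_params, lr=1e-4)
```

Because the base UNet never receives gradients, only the small set of injected adapter weights has to be optimized and saved.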
- 24 Jun, 2024 2 commits
Tolga Cangöz authored
* Fix typos
* Fix typos & up style
* chore: Update numbers

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Tolga Cangöz authored
* Trim all the trailing white space in the whole repo
* Remove unnecessary empty places
* make style && make quality
* Trim trailing white space
* trim

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 02 Apr, 2024 1 commit
Bagheera authored
* 7529 do not disable autocast for cuda devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes
* make style fixes
* style
* Empty-Commit

Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
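The autocast/native_amp bullets above amount to one recurring pattern in the training examples: turn off accelerate's native AMP where it is unsupported (MPS) and wrap inference in autocast only on devices that support it, falling back to a no-op context elsewhere. A rough sketch of that pattern, assuming an Accelerator built with fp16 mixed precision (illustrative, not the exact code of any one script):

```python
# Rough sketch of the autocast / native_amp handling described above.
from contextlib import nullcontext

import torch
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")

# Native AMP is not supported on Apple silicon (MPS), so turn it off there.
if torch.backends.mps.is_available():
    accelerator.native_amp = False

# Use autocast on devices that support it; otherwise fall back to a no-op context.
if torch.backends.mps.is_available():
    inference_ctx = nullcontext()
else:
    inference_ctx = torch.autocast(accelerator.device.type)

with inference_ctx:
    # run validation / logging inference here
    ...
```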
- 13 Mar, 2024 1 commit
Sayak Paul authored
switch to logger.warning
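For context, `logger.warning` is the non-deprecated spelling of the call in Python's `logging` API (`warn` is a deprecated alias). A tiny illustrative snippet using the diffusers logging helper; the message text is a placeholder, not from the commit:

```python
# Illustrative only: diffusers exposes a logging helper, and `warning` is the
# non-deprecated spelling of the call (the message here is a placeholder).
from diffusers.utils import logging

logger = logging.get_logger(__name__)
logger.warning("placeholder warning message")
```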
- 09 Feb, 2024 1 commit
Sayak Paul authored
- 08 Feb, 2024 1 commit
Sayak Paul authored
change to 2024
- 10 Jan, 2024 1 commit
Rahul Raman authored
* base template file - train_instruct_pix2pix.py
* additional import and parser argument required for lora
* finetune only the instructpix2pix model -- no need to include these layers
* inject lora layers
* freeze unet model -- only lora layers are trained
* training modifications to train only lora parameters
* store only lora parameters
* move train script to research project
* run quality and style code checks
* move train script to a new folder
* add README
* update README
* update references in README

Co-authored-by: Rahul Raman <rahulraman@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
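Since only the LoRA parameters are stored, the resulting checkpoint is small and is applied on top of the frozen base model at inference time. A hedged usage sketch using the generic diffusers LoRA-loading API; the output directory, input image, and prompt are hypothetical, not taken from this project's README:

```python
# Usage sketch: load only the trained LoRA weights on top of the frozen base model.
# The output directory, input image, and prompt below are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/instruct-pix2pix-lora-output")  # hypothetical path

input_image = Image.new("RGB", (512, 512))  # stand-in for a real input image
edited = pipe(
    "make the mountains snowy",  # placeholder edit instruction
    image=input_image,
    num_inference_steps=20,
).images[0]
```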