- 10 Nov, 2023 1 commit
aihao authored
-
- 09 Nov, 2023 1 commit
Suraj Patil authored
* add lcm scripts
Co-authored-by: dgu8957@gmail.com
-
- 06 Nov, 2023 1 commit
Sayak Paul authored
post release
-
- 01 Nov, 2023 1 commit
Patrick von Platen authored
Revert "Fix the order of width and height of original size in SDXL training script (#5382)"
This reverts commit 45db0499.
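The reverted change was about the ordering of the original-size values in SDXL's micro-conditioning. As a rough, hypothetical sketch (function name and the (height, width) tuple convention are assumptions, not taken from the script), the conditioning IDs are a flat concatenation of size tuples, so swapping height and width silently changes the conditioning:

```python
# Hypothetical sketch of SDXL micro-conditioning time IDs.
# Each size is assumed to be a (height, width) tuple; a swapped order
# produces a different, wrong conditioning vector with no error raised.
def make_add_time_ids(original_size, crops_coords_top_left, target_size):
    return list(original_size) + list(crops_coords_top_left) + list(target_size)

ids = make_add_time_ids((1024, 768), (0, 0), (1024, 1024))
```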
-
- 31 Oct, 2023 3 commits
M. Tolga Cangöz authored
* Add Copyright info
* Fix typos, improve, update
* Update deepfloyd_if.md
* Update ldm3d_diffusion.md
* Update opt_overview.md
-
YiYi Xu authored
fix
Co-authored-by: yiyixuxu <yixu@Yis-MacBook-Pro.local>
-
Jincheng Miao authored
-
- 30 Oct, 2023 1 commit
Thuan H. Nguyen authored
* Add realfill
* Move realfill folder
* Fix some format issues
-
- 27 Oct, 2023 1 commit
jiaqiw09 authored
* fix error reported for 'find_unused_parameters' when running on multiple GPUs or NPUs
* fix code check of importing module by its alphabetic order
Co-authored-by: jiaqiw <wangjiaqi50@huawei.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 26 Oct, 2023 1 commit
nickkolok authored
-
- 25 Oct, 2023 3 commits
Ran Ran authored
* Add from_pt flag to enable model from PT
* Format the file
* Reformat the file
-
Patrick von Platen authored
-
Logan authored
* Add a new community pipeline examples/community/latent_consistency_img2img.py which can be called like this:

      import torch
      from diffusers import DiffusionPipeline

      pipe = DiffusionPipeline.from_pretrained(
          "SimianLuo/LCM_Dreamshaper_v7",
          custom_pipeline="latent_consistency_txt2img",
          custom_revision="main",
      )
      # To save GPU memory, torch.float16 can be used, but it may compromise image quality.
      pipe.to(torch_device="cuda", torch_dtype=torch.float32)
      img2img = LatentConsistencyModelPipeline_img2img(
          vae=pipe.vae,
          text_encoder=pipe.text_encoder,
          tokenizer=pipe.tokenizer,
          unet=pipe.unet,
          scheduler=None,  # scheduler=pipe.scheduler,
          safety_checker=None,
          feature_extractor=pipe.feature_extractor,
          requires_safety_checker=False,
      )
      img = Image.open("thisismyimage.png")
      result = img2img(prompt, img, strength, num_inference_steps=4)

* Apply suggestions from code review: fix name formatting for scheduler
* update readme (and run formatter on latent_consistency_img2img.py)
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 23 Oct, 2023 3 commits
Patrick von Platen authored
-
Shyam Marjit authored
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Andrei Filatov authored
Right now, only the "main" branch has this community pipeline code, so it is added manually into the pipeline.
-
- 18 Oct, 2023 3 commits
linjiapro authored
* wip
* wip
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Liang Hou authored
-
Patrick von Platen authored
* Add latent consistency
* Update examples/community/README.md
* Add latent consistency
* make fix copies
* Apply suggestions from code review
-
- 17 Oct, 2023 1 commit
Susheel Thapa authored
-
- 16 Oct, 2023 4 commits
Sayak Paul authored
* update training examples to use HFAPI.
* update training example.
* reflect the changes in the korean version too.
* Empty-Commit
-
Heinz-Alexander Fuetterer authored
* chore: fix typos
* Update src/diffusers/pipelines/shap_e/renderer.py
Co-authored-by: psychedelicious <4822129+psychedelicious@users.noreply.github.com>
-
Kashif Rasul authored
* initial script
* formatting
* prior trainer wip
* add efficient_net_encoder
* add CLIPTextModel
* add prior ema support
* optimizer
* fix typo
* add dataloader
* prompt_embeds and image_embeds
* initial training loop
* fix output_dir
* fix add_noise
* accelerator check
* make effnet_transforms dynamic
* fix training loop
* add validation logging
* use loaded text_encoder
* use PreTrainedTokenizerFast
* load weight from pickle
* save_model_card
* remove unused file
* fix typos
* save prior pipeline in its own folder
* fix imports
* fix pipe_t2i
* scale image_embeds
* remove snr_gamma
* format
* initial lora prior training
* log_validation and save
* initial gradient working
* remove save/load hooks
* set set_attn_processor on prior_prior
* add lora script
* typos
* use LoraLoaderMixin for prior pipeline
* fix usage
* make fix-copies
* use repo_id
* write_lora_layers is a staticmethod
* use defaults
* fix defaults
* undo
* apply review suggestions to src/diffusers/loaders.py, src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_prior.py, src/diffusers/pipelines/wuerstchen/modeling_wuerstchen_prior.py, examples/wuerstchen/text_to_image/README.md, train_text_to_image_prior.py, and train_text_to_image_lora_prior.py
* add gradient checkpoint support to prior
* gradient_checkpointing
* formatting
* use default unet and text_encoder
* fix test
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Sayak Paul authored
* fix: unconditional generation example
* fix: float in loss.
* apply styling.
-
- 11 Oct, 2023 2 commits
Sayak Paul authored
* use LoRALinear instead of deprecated lora attn procs.
* fix parameters()
* fix saving
* add back support for add kv proj.
* fix: param accumulation.
* propagate the changes.
-
ssusie authored
* Added mark_step for sdxl to run with pytorch xla. Also updated README with instructions for xla
* adding soft dependency on torch_xla
* fix some styling
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
-
- 10 Oct, 2023 2 commits
Humphrey009 authored
fix problem of 'accelerator.is_main_process' when running on multiple GPUs or NPUs
Co-authored-by: jiaqiw <wangjiaqi50@huawei.com>
-
Julien Simon authored
Update requirements_sdxl.txt: add missing 'datasets'
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
- 09 Oct, 2023 2 commits
Pu Cao authored
* Update train_custom_diffusion.py
* make style
* Empty-Commit
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
chuzh authored
Fix [core/GLIGEN]: TypeError when iterating over 0-d tensor with in-painting mode when EulerAncestralDiscreteScheduler is used (#5305)
* fix(gligen_inpaint_pipeline): 🐛 Wrap the timestep 0-d tensor in a list to convert it to a 1-d tensor. This avoids the TypeError caused by trying to directly iterate over a 0-dimensional tensor in the denoising stage
* test(gligen/gligen_text_image): unit test using the EulerAncestralDiscreteScheduler
Co-authored-by: zhen-hao.chu <zhen-hao.chu@vitrox.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
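The failure mode behind this fix generalizes beyond tensors: iterating directly over a scalar raises a TypeError, while wrapping it in a one-element sequence makes the loop work. A minimal, torch-free sketch of the same pattern (function name is an illustration, not the pipeline's code):

```python
def denoise_steps(timestep):
    # A 0-d value (a bare scalar) cannot be iterated; wrapping it in a
    # list turns it into a 1-d sequence, mirroring the pipeline fix.
    try:
        steps = list(timestep)   # fails with TypeError for a bare scalar
    except TypeError:
        steps = [timestep]       # the fix: wrap the 0-d value
    return steps

print(denoise_steps(7))       # [7]
print(denoise_steps([3, 2]))  # [3, 2]
```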
-
- 08 Oct, 2023 1 commit
Zeng Xian authored
-
- 05 Oct, 2023 1 commit
Bagheera authored
Min-SNR Gamma: correct the fix for SNR-weighted loss in v-prediction by adding 1 to the SNR rather than to the resulting loss weights
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
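In isolation, the correction reads: for v-prediction the effective objective already carries an SNR + 1 weight, so the Min-SNR weight divides the clipped SNR by snr + 1, rather than computing min(snr, γ)/snr and adjusting the final weights afterwards. A minimal, torch-free sketch assuming the usual Min-SNR definitions (not the actual training-script code):

```python
def min_snr_vpred_weight(snr, gamma=5.0):
    # v-prediction: the "+1" is applied to the SNR in the denominator,
    # i.e. weight = min(snr, gamma) / (snr + 1), not to the final weight.
    return min(snr, gamma) / (snr + 1.0)
```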
-
- 03 Oct, 2023 1 commit
Patrick von Platen authored
* [SDXL Flax] Add research folder
* Add co-author
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
-
- 02 Oct, 2023 2 commits
Patrick von Platen authored
* fix all
* make fix copies
* make fix copies
-
Sayak Paul authored
* fix: how training resume logs are printed
* propagate changes to text-to-image scripts
* propagate changes to instructpix2pix
* propagate changes to dreambooth
* propagate changes to custom diffusion and instructpix2pix
* propagate changes to kandinsky
* propagate changes to textual inv.
* debug
* fix: checkpointing
* debug
* debug
* debug
* back to the square
* debug
* debug
* change condition order
* debug
* debug
* debug
* debug
* revert to original
* clean
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
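Checkpoint-resume logic in these training scripts picks the latest `checkpoint-<step>` directory, and the sort must be numeric rather than lexicographic (lexicographically, "checkpoint-900" beats "checkpoint-1000"). A sketch of that pattern, with names assumed for illustration:

```python
def latest_checkpoint(dirs):
    # Keep only checkpoint-<step> entries and sort by the integer step,
    # not by string order ("checkpoint-1000" must beat "checkpoint-900").
    ckpts = [d for d in dirs if d.startswith("checkpoint")]
    ckpts.sort(key=lambda d: int(d.split("-")[1]))
    return ckpts[-1] if ckpts else None
```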
-
- 28 Sep, 2023 1 commit
Nicholas Bardy authored
Update README_sdxl.md
-
- 27 Sep, 2023 2 commits
Benjamin Paine authored
* Update run_onnx_controlnet.py
* Update run_tensorrt_controlnet.py
-
Sayak Paul authored
add compute_snr() to training utils.
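The signal-to-noise ratio of a diffusion timestep follows from the cumulative alphas: SNR(t) = ᾱ_t / (1 − ᾱ_t). A torch-free sketch of the idea (the real utility operates on scheduler tensors; the list-based signature here is an assumption):

```python
def compute_snr(alphas_cumprod, timestep):
    # SNR(t) = alpha_bar_t / (1 - alpha_bar_t), i.e. (signal / noise)^2
    # for the forward process x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps.
    alpha_bar = alphas_cumprod[timestep]
    return alpha_bar / (1.0 - alpha_bar)
```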
-
- 26 Sep, 2023 2 commits
Bagheera authored
* merge with main
* fix flax example
* fix onnx example
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Bagheera authored
* Timestep bias for fine-tuning SDXL
* Adjust parameter choices to include "range" and reword the help statements
* Condition our use of weighted timesteps on the value of timestep_bias_strategy
* style
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
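Conceptually, a "range" timestep-bias strategy multiplies the sampling weight of timesteps inside a chosen interval, then samples timesteps from the resulting distribution. A hypothetical sketch (function and parameter names are assumptions, not the script's flags):

```python
import random

def biased_timestep_weights(num_timesteps, bias_begin, bias_end, multiplier):
    # Start uniform, then boost (or suppress) a range of timesteps,
    # sketching a "range" timestep-bias strategy.
    weights = [1.0] * num_timesteps
    for t in range(bias_begin, bias_end):
        weights[t] *= multiplier
    return weights

weights = biased_timestep_weights(10, 2, 5, 3.0)
t = random.choices(range(10), weights=weights, k=1)[0]  # biased draw
```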
-