- 26 Sep, 2023 4 commits
-
-
Patrick von Platen authored
* fix SDXL flax init * finish * Fix
-
Pedro Cuenca authored
* timestep_spacing for FlaxDPMSolverMultistepScheduler * Style
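A minimal sketch of the new option in use, assuming the Flax scheduler accepts the same `timestep_spacing` values ("linspace", "leading", "trailing") as its PyTorch counterpart; the checkpoint name is only an example:
```python
# Sketch: pick "trailing" timestep spacing when loading the Flax DPM-Solver++ scheduler.
# Assumes config overrides can be passed through `from_pretrained`, as with the PyTorch schedulers.
from diffusers import FlaxDPMSolverMultistepScheduler

scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    subfolder="scheduler",
    timestep_spacing="trailing",
)
```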
-
Ernie Chu authored
```
do_binarize (`bool`, *optional*, defaults to `True`)
        |
        v
do_binarize (`bool`, *optional*, defaults to `False`)
```
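For context, `do_binarize` is an option of the image processor used by the inpainting pipelines. A hedged sketch of setting it explicitly (import path per the current `diffusers` layout; the preprocessing call in the comment is assumed):
```python
# Sketch: opt back into mask binarization explicitly, since the default is False.
from diffusers.image_processor import VaeImageProcessor

mask_processor = VaeImageProcessor(do_binarize=True, do_normalize=False)
# mask = mask_processor.preprocess(pil_mask)  # values would be thresholded to {0, 1}
```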
-
Dhruv Nair authored
* test fix * fix tests * fix report name --------- Co-authored-by:Patrick von Platen <patrick.v.platen@gmail.com>
-
- 25 Sep, 2023 12 commits
-
-
Patrick von Platen authored
-
Ryan Dick authored
* Fix FullAdapterXL.total_downscale_factor. * Fix incorrect error message in T2IAdapter.__init__(...). * Move IP-Adapter test_total_downscale_factor(...) to pipeline test file (requested in code review). * Add more info to error message about an unsupported T2I-Adapter adapter_type. --------- Co-authored-by:Patrick von Platen <patrick.v.platen@gmail.com>
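A small sketch of checking the fixed value (checkpoint name is only an example, and `total_downscale_factor` is assumed to be a public attribute as the commit message suggests):
```python
# Sketch: inspect the adapter's overall downscale factor after the fix.
from diffusers import T2IAdapter

adapter = T2IAdapter.from_pretrained("TencentARC/t2iadapter_canny_sd14v1")
print(adapter.total_downscale_factor)
```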
-
Hengwen Tong authored
Make sure the repo_id is valid before sending it to huggingface_hub to get a more understandable error message. Re #5110 Co-authored-by:Patrick von Platen <patrick.v.platen@gmail.com>
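A hedged sketch of the kind of up-front check described, using the validator shipped with `huggingface_hub` (the exact call site in the training scripts may differ):
```python
# Sketch: fail early with a readable message instead of a low-level hub error.
from huggingface_hub.utils import HFValidationError, validate_repo_id


def check_repo_id(repo_id: str) -> str:
    try:
        validate_repo_id(repo_id)
    except HFValidationError as err:
        raise ValueError(f"'{repo_id}' is not a valid repo_id: {err}") from err
    return repo_id
```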
-
Bagheera authored
SDXL microconditioning documentation should indicate the correct default order of parameters, so that developers know (#5155) * SDXL microconditioning documentation should indicate the correct default order of parameters, so that developers know * SDXL microconditioning documentation should indicate the correct default order of parameters, so that developers know * empty --------- Co-authored-by:
bghira <bghira@users.github.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
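A hedged example of passing the SDXL micro-conditioning arguments by keyword so their order is unambiguous (checkpoint and values are illustrative):
```python
# Sketch: explicit micro-conditioning in the documented order
# (original_size, crops_coords_top_left, target_size).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
image = pipe(
    "a photo of an astronaut riding a horse",
    original_size=(1024, 1024),
    crops_coords_top_left=(0, 0),
    target_size=(1024, 1024),
).images[0]
```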
-
Patrick von Platen authored
-
Carson Katri authored
-
Anh71me authored
* Fix type annotation on Scheduler.from_pretrained * Fix type annotation on PIL.Image
-
Patrick von Platen authored
* [Doc builder] Ensure slow import for doc builder * Apply suggestions from code review * env for doc builder * fix more * [Diffusers] Set import to slow as env variable * fix docs * fix docs * Apply suggestions from code review * Apply suggestions from code review * fix docs * fix docs
-
Patrick von Platen authored
* fix cpu offload * fix * fix * Update src/diffusers/pipelines/pipeline_utils.py * make style * Apply suggestions from code review Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * fix more * fix more --------- Co-authored-by:
YiYi Xu <yixu310@gmail.com> Co-authored-by:
Pedro Cuenca <pedro@huggingface.co>
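The fix touches the offloading hooks set up in `pipeline_utils.py`; a minimal usage sketch of the offload API involved (model id illustrative, `accelerate` required):
```python
# Sketch: model-level CPU offload; components move to the GPU only while in use.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
image = pipe("a watercolor painting of a fox").images[0]
```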
-
Dhruv Nair authored
cpu offload fix for blip diffusion
-
Patrick von Platen authored
* add is flaky decorator * fix more
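A sketch of how such a decorator is typically applied in the test suite; the import path and call style are assumed from the commit title and may differ from the actual implementation:
```python
# Sketch: retry a nondeterministic test a few times instead of failing the first run.
# Assumes the decorator lives in diffusers.utils.testing_utils and is used as `@is_flaky()`.
import unittest

from diffusers.utils.testing_utils import is_flaky


class ExampleTests(unittest.TestCase):
    @is_flaky()
    def test_occasionally_unstable(self):
        self.assertTrue(True)
```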
-
Sayak Paul authored
* print * print * print * print * print * debugging * debugging * debugging * debugging * safer condition. * remove prints and try excepts. * Empty-Commit * Apply suggestions from code review --------- Co-authored-by:Patrick von Platen <patrick.v.platen@gmail.com>
-
- 23 Sep, 2023 1 commit
-
-
YiYi Xu authored
* remove to _device() for sigmas * update add_noise to use sigmas --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
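For reference, sigma-parameterized noising as used by the Euler/Karras-style schedulers follows the simple form below; this is an illustrative sketch, not the scheduler's exact implementation:
```python
# Sketch: noisy_sample = clean_sample + sigma_t * noise, with sigmas kept on the
# same device as the samples so no extra device transfers are needed.
import torch


def add_noise(clean_sample: torch.Tensor, noise: torch.Tensor, sigma_t: torch.Tensor) -> torch.Tensor:
    while sigma_t.ndim < clean_sample.ndim:
        sigma_t = sigma_t.unsqueeze(-1)  # broadcast sigma over the sample dims
    return clean_sample + sigma_t * noise
```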
-
- 22 Sep, 2023 4 commits
-
-
Pedro Cuenca authored
* support transformer_layers_per block in flax UNet * add support for text_time additional embeddings to Flax UNet * rename attention layers for VAE * add shape asserts when renaming attention layers * transpose VAE attention layers * add pipeline flax SDXL code [WIP] * continue add pipeline flax SDXL code [WIP] * cleanup * Working on JIT support Fixed prompt embedding shapes so they work in parallel mode. Assuming we always have both text encoders for now, for simplicity. * Fixing embeddings (untested) * Remove spurious line * Shard guidance_scale when jitting. * Decode images * Fix sharding * style * Refiner UNet can be loaded. * Refiner / img2img pipeline * Allow latent outputs from base and latent inputs in refiner This makes it possible to chain base + refiner without having to use the vae decoder in the base model, the vae encoder in the refiner, skipping conversions to/from PIL, and avoiding TPU <-> CPU memory copies. * Adapt to FlaxCLIPTextModelOutput * Update Flax XL pipeline to FlaxCLIPTextModelOutput * make fix-copies * make style * add euler scheduler * Fix import * Fix copies, comment unused code. * Fix SDXL Flax imports * Fix euler discrete begin * improve init import * finish * put discrete euler in init * fix flax euler * Fix more * make style * correct init * correct init * Temporarily remove FlaxStableDiffusionXLImg2ImgPipeline * correct pipelines * finish --------- Co-authored-by:
Martin Müller <martin.muller.me@gmail.com> Co-authored-by:
patil-suraj <surajp815@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
Pedro Cuenca authored
* SDXL: update links to refine docs * make style
-
Younes Belkada authored
* more fixes * up * up * style * add in setup * oops * more changes * v1 rzfactor CI * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * few todos * protect torch import * style * fix fuse text encoder * Update src/diffusers/loaders.py Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * replace with `recurse_replace_peft_layers` * keep old modules for BC * adjustments on `adjust_lora_scale_text_encoder` * nit * move tests * add conversion utils * remove unneeded methods * use class method instead * oops * use `base_version` * fix examples * fix CI * fix weird error with python 3.8 * fix * better fix * style * Apply suggestions from code review Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * add comment * Apply suggestions from code review Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * conv2d support for recurse remove * added docstrings * more docstring * add deprecate * revert * try to fix merge conflicts * v1 tests * add new decorator * add saving utilities test * adapt tests a bit * add save / from_pretrained tests * add saving tests * add scale tests * fix deps tests * fix lora CI * fix tests * add comment * fix * style * add slow tests * slow tests pass * style * Update src/diffusers/utils/import_utils.py Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * Apply suggestions from code review Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com> * circumvents pattern finding issue * left a todo * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * update hub path * add lora workflow * fix --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> Co-authored-by:
Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
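The user-facing LoRA API is unchanged by this refactor; a hedged usage sketch (LoRA repo id illustrative, and the PEFT-backed path is used when `peft` is installed):
```python
# Sketch: load and fuse LoRA weights; with `peft` installed this exercises the
# new PEFT-backed code path introduced by the refactor.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ostris/crayon_style_lora_sdxl")  # example LoRA repository
pipe.fuse_lora()
image = pipe("a crayon drawing of a lighthouse").images[0]
```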
-
hysts authored
Add missing parenthesis in the sample code of BLIP Diffusion
-
- 21 Sep, 2023 3 commits
-
-
YiYi Xu authored
--------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
YiYi Xu authored
fix and add tests Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Ayush Mangal authored
* Add BLIP Diffusion skeleton * Add other model components * Add BLIP2, need to change it for now * Fix pipeline imports * Load pretrained ViT * Make qformer fwd pass same * Replicate fwd passes * Fix device bug * Add accelerate functions * Remove extra functions from Blip2 * Minor bug * Integrate initial review changes * Refactoring * Refactoring * Refactor * Add controlnet * Refactor * Update conversion script * Add image processor * Shift postprocessing to ImageProcessor * Refactor * Fix device * Add fast tests * Update conversion script * Fix checkpoint conversion script * Integrate review changes * Integrate review changes * Remove unused functions from test * Reuse HF image processor in Cond image * Create new BlipImageProcessor based on transformers * Fix image preprocessor * Minor * Minor * Add canny preprocessing * Fix controlnet preprocessing * Fix blip diffusion test * Add controlnet test * Add initial doc strings * Integrate review changes * Refactor * Update examples * Remove DDIM comments * Add copied from for prepare_latents * Add type annotations * Add docstrings * Do black formatting * Add batch support * Make tests pass * Make controlnet tests pass * Black formatting * Fix progress bar * Fix some licensing comments * Fix imports * Refactor controlnet * Make tests faster * Edit examples * Black formatting/Ruff * Add doc * Minor Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * Move controlnet pipeline * Make tests faster * Fix imports * Fix formatting * Fix make errors * Fix make errors * Minor * Add suggested doc changes Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Edit docs * Fix 16 bit loading * Update examples * Edit toctree * Update docs/source/en/api/pipelines/blip_diffusion.md Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com> * Minor * Add tips * Edit examples * Update model paths --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
Sayak Paul <spsayakpaul@gmail.com>
-
- 20 Sep, 2023 1 commit
-
-
Sayak Paul authored
* better condition. * debugging * how about now? * how about now? * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * debugging * support for lycoris. * style * add: lycoris test * fix from_pretrained call. * fix assertion values.
-
- 19 Sep, 2023 4 commits
-
-
YiYi Xu authored
--------- Co-authored-by:
yiyixuxu <yixu310@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
-
Sayak Paul authored
* core: add support for clip skip to SDXL * add clip_skip support to the rest of the pipeline. * Empty-Commit
-
Patrick von Platen authored
* [SDXL] Make sure multi batch prompt embeds works * [SDXL] Make sure multi batch prompt embeds works * improve more * improve more * Apply suggestions from code review
-
Sayak Paul authored
* add support for clip skip * fix condition * fix * add clip_output_layer_to_default * expose * remove the previous functions. * correct condition. * apply final layer norm * address feedback * Apply suggestions from code review Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> * refactor clip_skip. * port to the other pipelines. * fix copies one more time --------- Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
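A hedged usage sketch of the new argument (checkpoint illustrative); `clip_skip` controls how many final CLIP text-encoder layers are skipped when computing prompt embeddings:
```python
# Sketch: pass clip_skip at call time.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("a cat wearing a spacesuit", clip_skip=2).images[0]
```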
-
- 18 Sep, 2023 7 commits
-
-
Sayak Paul authored
* don't break offloading for incompatible lora ckpts. * debugging * better condition. * fix * fix * fix * fix --------- Co-authored-by:Patrick von Platen <patrick.v.platen@gmail.com>
-
Will Berman authored
remove adapter weights in MultiAdapter constructor
-
Will Berman authored
* convert tensorrt controlnet * Fix code quality * Fix code quality * Fix code quality * Fix code quality * Fix code quality * Fix code quality * Fix number controlnet condition * Add convert SD XL to onnx * Add convert SD XL to tensorrt * Add convert SD XL to tensorrt * Add examples in comments * Add examples in comments * Add test onnx controlnet * Add tensorrt test * Remove copied * Move file test to examples/community * Remove script * Remove script * Remove text * Fix import * Fix T2I MultiAdapter * fix tests --------- Co-authored-by:
dotieuthien <thien.do@mservice.com.vn> Co-authored-by:
dotieuthien <dotieuthien9997@gmail.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by:
dotieuthien <hades@cinnamon.is>
-
Ruoxi authored
* Implement `CustomDiffusionAttnProcessor2_0` * Doc-strings and type annotations for `CustomDiffusionAttnProcessor2_0`. (#1) * Update attnprocessor.md * Update attention_processor.py * Interops for `CustomDiffusionAttnProcessor2_0`. * Formatted `attention_processor.py`. * Formatted doc-string in `attention_processor.py` * Conditional CustomDiffusion2_0 for training example. * Remove unnecessary reference impl in comments. * Fix `save_attn_procs`.
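A hedged sketch of opting into the new processor only when PyTorch 2.0's scaled-dot-product attention is available, mirroring the conditional used in the training example:
```python
# Sketch: prefer the SDPA-based processor on PyTorch >= 2.0, otherwise fall back.
import torch.nn.functional as F

from diffusers.models.attention_processor import (
    CustomDiffusionAttnProcessor,
    CustomDiffusionAttnProcessor2_0,
)

attention_class = (
    CustomDiffusionAttnProcessor2_0
    if hasattr(F, "scaled_dot_product_attention")
    else CustomDiffusionAttnProcessor
)
```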
-
Patrick von Platen authored
* [Textual inversion] Clean loading * [Textual inversion] Clean loading * [Textual inversion] Clean up * [Textual inversion] Clean up * [Textual inversion] Clean up * [Textual inversion] Clean up
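The entry point being cleaned up is `load_textual_inversion`; a minimal hedged usage sketch (embedding repo and trigger token taken from the public sd-concepts-library example):
```python
# Sketch: load a textual-inversion embedding and use its trigger token in the prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a <cat-toy> sitting on a bookshelf").images[0]
```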
-
YiYi Xu authored
* fix * fix num_images_per_prompt > 1 * other pipelines * add fast tests for inpaint pipelines --------- Co-authored-by: yiyixuxu <yixu310@gmail.com>
-
Lee Dong Joo authored
-
- 16 Sep, 2023 1 commit
-
-
Kashif Rasul authored
* [LoRA] fix typo in attention_processor.py fixes #5062 * make style * make fix-copies, logger commented for torch compile
-
- 15 Sep, 2023 3 commits
-
-
Kashif Rasul authored
* fix typos in docs * fix for issue #5023
-
Bagheera authored
Remove logger.info statement from Unet2DCondition code to ensure torch compile reliably succeeds (#4982) * Remove logger.info statement from Unet2DCondition code to ensure torch compile reliably succeeds * Convert logging statement to a comment for future archaeologists * Update src/diffusers/models/unet_2d_condition.py Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com> --------- Co-authored-by:
bghira <bghira@users.github.com> Co-authored-by:
Patrick von Platen <patrick.v.platen@gmail.com>
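The motivation is keeping the UNet forward pass free of Python logging so compilation does not hit graph breaks; a hedged sketch of the workflow this unblocks (model id illustrative):
```python
# Sketch: compile the UNet; logging inside forward() can otherwise break the graph.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
image = pipe("a lighthouse at dusk").images[0]
```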
-
dg845 authored
* Add attn_groups argument to UNet2DMidBlock2D to control the internal Attention block's GroupNorm. * Add docstring for attn_norm_num_groups in UNet2DModel. * Since the test UNet config uses resnet_time_scale_shift == 'scale_shift', also set attn_norm_num_groups to 32. * Add test for attn_norm_num_groups to UNet2DModelTests. * Fix expected slices for slow tests. * Also fix tolerances for slow tests. --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
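A hedged sketch of the new config knob on `UNet2DModel` (block layout and values are illustrative; leaving `attn_norm_num_groups` unset keeps the previous behaviour):
```python
# Sketch: configure the GroupNorm inside the mid-block attention layer.
from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
    attn_norm_num_groups=32,
)
```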
-