- 14 Jun, 2025 1 commit
Edna authored
* working state from hameerabbasi and iddl * working state from hameerabbasi and iddl (transformer) * working state (normalization) * working state (embeddings) * add chroma loader * add chroma to mappings * add chroma to transformer init * take out variant stuff * get decently far in changing variant stuff * add chroma init * make chroma output class * add chroma transformer to dummy tp * add chroma to init * add chroma to init * fix single file * update * update * add chroma to auto pipeline * add chroma to pipeline init * change to chroma transformer * take out variant from blocks * swap embedder location * remove prompt_2 * work on swapping text encoders * remove mask function * dont modify mask (for now) * wrap attn mask * no attn mask (can't get it to work) * remove pooled prompt embeds * change to my own unpooled embedder * fix load * take pooled projections out of transformer * ensure correct dtype for chroma embeddings * update * use dn6 attn mask + fix true_cfg_scale * use chroma pipeline output * use DN6 embeddings * remove guidance * remove guidance embed (pipeline) * remove guidance from embeddings * don't return length * dont change dtype * remove unused stuff, fix up docs * add chroma autodoc * add .md (oops) * initial chroma docs * undo don't change dtype * undo arxiv change, unsure why that happened * fix hf papers regression in more places * Update docs/source/en/api/pipelines/chroma.md Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * do_cfg -> self.do_classifier_free_guidance * Update docs/source/en/api/models/chroma_transformer.md Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * Update chroma.md * Move chroma layers into transformer * Remove pruned AdaLayerNorms * Add chroma fast tests * (untested) batch cond and uncond * Add # Copied from for shift * Update # Copied from statements * update norm imports * Revert cond + uncond batching * Add transformer tests * move chroma test (oops) * chroma init * fix chroma pipeline fast tests * Update src/diffusers/models/transformers/transformer_chroma.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * Move Approximator and Embeddings * Fix auto pipeline + make style, quality * make style * Apply style fixes * switch to new input ids * fix # Copied from error * remove # Copied from on protected members * try to fix import * fix import * make fix-copies * revert style fix * update chroma transformer params * update chroma transformer approximator init params * update to pad tokens * fix batch inference * Make more pipeline tests work * Make most transformer tests work * fix docs * make style, make quality * skip batch tests * fix test skipping * fix test skipping again * fix for tests * Fix all pipeline tests * update * push local changes, fix docs * add encoder test, remove pooled dim * default proj dim * fix tests * fix equal size list input * update * push local changes, fix docs * add encoder test, remove pooled dim * default proj dim * fix tests * fix equal size list input * Revert "fix equal size list input" This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c. * update * update * update * update * update --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
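A minimal usage sketch for the Chroma pipeline this commit introduces. The class name, checkpoint id, and call arguments below are assumptions for illustration, not API confirmed by the commit itself.

```python
# Hypothetical sketch: running the Chroma text-to-image pipeline added in this commit.
# ChromaPipeline and the repo id are assumed names; adjust to the actual exports.
import torch
from diffusers import ChromaPipeline  # assumed export

pipe = ChromaPipeline.from_pretrained(
    "lodestones/Chroma",  # placeholder checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Per the commit message, Chroma drops pooled prompt embeddings and the guidance
# embedding, so the call is a plain prompt / negative-prompt invocation.
image = pipe(
    prompt="a photograph of a red fox in a snowy forest",
    negative_prompt="low quality, blurry",
    num_inference_steps=30,
).images[0]
image.save("chroma_fox.png")
```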
- 19 May, 2025 1 commit
Dhruv Nair authored
update
- 15 May, 2025 1 commit
Dhruv Nair authored
* update * update * update * update * update * update * update
- 01 May, 2025 1 commit
co63oc authored
* Fix typos in docs and comments * Apply style fixes --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
- 16 Apr, 2025 1 commit
Sayak Paul authored
* enable telemetry for single file loading when using GGUF. * quality
- 10 Apr, 2025 1 commit
hlky authored
- 04 Apr, 2025 2 commits
Dhruv Nair authored
update
Kenneth Gerald Hamilton authored
* Fixed requests.get function call by adding timeout parameter. * declare DIFFUSERS_REQUEST_TIMEOUT in constants and import when needed * remove unneeded os import * Apply style fixes --------- Co-authored-by: Sai-Suraj-27 <sai.suraj.27.729@gmail.com> Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
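The pattern described here is small enough to sketch: a single timeout constant defined once and passed to every outbound request. Only the constant's name comes from the commit message; its value and the helper function below are assumptions for illustration.

```python
# Sketch of the request-timeout pattern from this commit.
import requests

DIFFUSERS_REQUEST_TIMEOUT = 60  # seconds; assumed value, the real default lives in diffusers' constants module

def fetch_file(url: str) -> bytes:
    # Without an explicit timeout, requests.get can hang indefinitely on a stalled
    # connection; with one, the stall becomes a raised exception instead.
    response = requests.get(url, timeout=DIFFUSERS_REQUEST_TIMEOUT)
    response.raise_for_status()
    return response.content
```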
- 10 Mar, 2025 1 commit
Ishan Modi authored
* added support for from_single_file * added diffusers mapping script * added testcase * bug fix * updated tests * corrected code quality * corrected code quality --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 07 Mar, 2025 1 commit
Dhruv Nair authored
* update * update * update * update * update * update * update
- 06 Mar, 2025 1 commit
Dhruv Nair authored
update
- 03 Mar, 2025 1 commit
Teriks authored
* Fix SD2.X clip single file load projection_dim: infer projection_dim from the checkpoint before loading from pretrained, overriding any incorrect hub config. Hub configuration for SD2.X specifies projection_dim=512, which is incorrect for SD2.X checkpoints loaded from civitai and similar. An exception was previously thrown upon attempting load_model_dict_into_meta for SD2.X single file checkpoints. Such LDM models usually require projection_dim=1024. * convert_open_clip_checkpoint: use hidden_size for text_proj_dim * convert_open_clip_checkpoint: revert checkpoint[text_proj_key].shape[1] -> [0]; values are identical --------- Co-authored-by: Teriks <Teriks@users.noreply.github.com> Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
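A sketch of the idea in this fix: read the text projection weight's shape directly from the single-file checkpoint and let it override the hub config. The helper and the key name below are assumptions for illustration, not the actual diffusers code.

```python
# Hypothetical helper: infer projection_dim from the checkpoint tensor instead of
# trusting the hub config (which says 512 for some SD2.x configs).
def infer_projection_dim(checkpoint: dict, default: int = 1024) -> int:
    text_proj_key = "cond_stage_model.model.text_projection"  # assumed LDM-style key
    if text_proj_key in checkpoint:
        # SD2.x single-file checkpoints typically carry a 1024-dim projection here.
        return checkpoint[text_proj_key].shape[0]
    return default
```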
- 19 Feb, 2025 1 commit
Marc Sun authored
* first draft model loading refactor * revert name change * fix bnb * revert name * fix dduf * fix hunyuan * style * Update src/diffusers/models/model_loading_utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * suggestions from reviews * Update src/diffusers/models/modeling_utils.py Co-authored-by: YiYi Xu <yixu310@gmail.com> * remove safetensors check * fix default value * more fixes from suggestions * revert logic for single file * style * typing + fix a couple of issues * improve speed * Update src/diffusers/models/modeling_utils.py Co-authored-by: Aryan <aryan@huggingface.co> * fp8 dtype * add tests * rename resolved_archive_file to resolved_model_file * format * map_location default cpu * add utility function * switch to smaller model + test inference * Apply suggestions from code review Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * rm comment * add log * Apply suggestions from code review Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * add decorator * cosine sim instead * fix use_keep_in_fp32_modules * comm --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com> Co-authored-by: Aryan <aryan@huggingface.co>
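One detail worth spelling out is the "cosine sim instead" test change: comparing outputs of the old and new loading paths by cosine similarity rather than exact equality, so tiny numeric drift does not fail the test. A rough sketch, with the helper name and threshold chosen here as assumptions:

```python
import torch

def outputs_match(a: torch.Tensor, b: torch.Tensor, threshold: float = 0.99) -> bool:
    # Flatten both outputs and compare direction rather than exact values, which
    # tolerates small numeric differences between loading paths.
    sim = torch.nn.functional.cosine_similarity(a.flatten().float(), b.flatten().float(), dim=0)
    return sim.item() >= threshold
```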
- 12 Feb, 2025 1 commit
Dhruv Nair authored
* update * update
- 21 Jan, 2025 1 commit
Sayak Paul authored
change licensing to 2025 from 2024.
- 13 Jan, 2025 2 commits
Dhruv Nair authored
* update * update * update * update --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
hlky authored
- 10 Jan, 2025 1 commit
Daniel Hipke authored
Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305) * Add no_mmap arg. * Fix arg parsing. * Update another method to force no mmap. * logging * logging2 * propagate no_mmap * logging3 * propagate no_mmap * logging4 * fix open call * clean up logging * cleanup * fix missing arg * update logging and comments * Rename to disable_mmap and update other references. * [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316) Update ltx_video.md to remove generator from `from_pretrained()` * docs: fix a mistake in docstring (#10319) Update pipeline_hunyuan_video.py docs: fix a mistake * [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in function prepare_latents caused by audio_vae_length (#10306) [BUG FIX] [Stable Audio Pipeline] TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints,but got float" torch.Tensor.new_zeros() takes a single argument size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor. In function prepare_latents: audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length) ... audio = initial_audio_waveforms.new_zeros(audio_shape) audio_vae_length evaluates to float because self.transformer.config.sample_size returns a float Co-authored-by: hlky <hlky@hlky.ac> * [docs] Fix quantization links (#10323) Update overview.md * [Sana] add 2K related model for Sana (#10322) add 2K related model for Sana * Update src/diffusers/loaders/single_file_model.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * Update src/diffusers/loaders/single_file.py Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com> * make style --------- Co-authored-by: hlky <hlky@hlky.ac> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Leojc <liao_junchao@outlook.com> Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by: Junsong Chen <cjs1020440147@icloud.com> Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
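A usage sketch for the `disable_mmap` option this PR adds. The checkpoint path is a placeholder; reading the file up front instead of memory-mapping it is what helps when the checkpoint sits on a network mount.

```python
# Sketch: loading a single-file checkpoint without mmap, per the option added here.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "/mnt/nfs/checkpoints/model.safetensors",  # placeholder path on a network mount
    torch_dtype=torch.float16,
    disable_mmap=True,  # read the whole file into memory instead of memory-mapping it
)
```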
- 08 Jan, 2025 1 commit
AstraliteHeart authored
* Add support for loading AuraFlow models from GGUF https://huggingface.co/city96/AuraFlow-v0.3-gguf * Update AuraFlow documentation for GGUF, add GGUF tests and model detection. * Address code review comments. * Remove unused config. --------- Co-authored-by: hlky <hlky@hlky.ac>
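A loading sketch for the AuraFlow GGUF support added here. The repo URL comes from the commit message; the specific file name, base repo id, and quantization-config details are assumptions based on diffusers' existing GGUF loading pattern.

```python
import torch
from diffusers import AuraFlowPipeline, AuraFlowTransformer2DModel, GGUFQuantizationConfig

# File name within the linked repo is a guess; pick the quantization level you need.
ckpt = "https://huggingface.co/city96/AuraFlow-v0.3-gguf/blob/main/aura_flow_0.3-Q5_K_M.gguf"

transformer = AuraFlowTransformer2DModel.from_single_file(
    ckpt,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = AuraFlowPipeline.from_pretrained(
    "fal/AuraFlow-v0.3",  # assumed base repo for the remaining components
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
```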
- 06 Jan, 2025 1 commit
hlky authored
* Add torch_xla and from_single_file to instruct-pix2pix * StableDiffusionInstructPix2PixPipelineSingleFileSlowTests * StableDiffusionInstructPix2PixPipelineSingleFileSlowTests --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
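A short sketch of the single-file loading path this commit wires up and tests; the checkpoint filename below is a placeholder assumption.

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline

# Placeholder local checkpoint; any original-format InstructPix2Pix weights file.
pipe = StableDiffusionInstructPix2PixPipeline.from_single_file(
    "instruct-pix2pix-00-22000.safetensors",
    torch_dtype=torch.float16,
)
```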
- 23 Dec, 2024 3 commits
Aryan authored
* update * make style * update * update * update * make style * single file related changes * update * fix * update single file urls and docs * update * fix
Dhruv Nair authored
update
Dhruv Nair authored
* update * Update src/diffusers/loaders/single_file_utils.py Co-authored-by: Aryan <aryan@huggingface.co> --------- Co-authored-by: Aryan <aryan@huggingface.co>
- 20 Dec, 2024 1 commit
Dhruv Nair authored
* update * add docs. --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 19 Dec, 2024 1 commit
Dhruv Nair authored
update
- 18 Dec, 2024 1 commit
Dhruv Nair authored
update
- 17 Dec, 2024 1 commit
Dhruv Nair authored
* update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * update * Update src/diffusers/quantizers/gguf/utils.py Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> * update * update * update * update * update * update * update * update * update * update * Update docs/source/en/quantization/gguf.md Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * update * update * update * update --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
- 12 Dec, 2024 1 commit
Aryan authored
* transformer * make style & make fix-copies * transformer * add transformer tests * 80% vae * make style * make fix-copies * fix * undo cogvideox changes * update * update * match vae * add docs * t2v pipeline working; scheduler needs to be checked * docs * add pipeline test * update * update * make fix-copies * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * update * copy t2v to i2v pipeline * update * apply review suggestions * update * make style * remove framewise encoding/decoding * pack/unpack latents * image2video * update * make fix-copies * update * update * rope scale fix * debug layerwise code * remove debug * Apply suggestions from code review Co-authored-by: YiYi Xu <yixu310@gmail.com> * propagate precision changes to i2v pipeline * remove downcast * address review comments * fix comment * address review comments * [Single File] LTX support for loading original weights (#10135) * from original file mixin for ltx * undo config mapping fn changes * update * add single file to pipelines * update docs * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py * rename classes based on ltx review * point to original repository for inference * make style * resolve conflicts correctly --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> Co-authored-by: YiYi Xu <yixu310@gmail.com>
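A text-to-video usage sketch for the pipeline described in this commit. The class name, repo id, and frame count are assumptions for illustration.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.to("cuda")

video = pipe(
    prompt="a drone shot following a river through a pine forest at dawn",
    num_frames=65,           # assumed frame count
    num_inference_steps=40,
).frames[0]
export_to_video(video, "ltx_river.mp4", fps=24)
```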
- 11 Dec, 2024 1 commit
Dhruv Nair authored
* update * update * update
- 02 Dec, 2024 1 commit
Dhruv Nair authored
update
- 22 Nov, 2024 1 commit
hlky authored
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
- 22 Oct, 2024 1 commit
Dhruv Nair authored
* update * update * update * update * update * update
- 24 Sep, 2024 1 commit
YiYi Xu authored
* update sd15 repo * update more
- 30 Aug, 2024 1 commit
YiYi Xu authored
update to a placeholder
- 23 Aug, 2024 1 commit
Dhruv Nair authored
update
- 21 Aug, 2024 1 commit
Dhruv Nair authored
update
- 20 Aug, 2024 1 commit
Vishnu V Jaddipal authored
- 07 Aug, 2024 2 commits
Aryan authored
* allow sparsectrl to be loaded with single file * update --------- Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Dhruv Nair authored
* update * update * update --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
- 18 Jul, 2024 1 commit
Sayak Paul authored
* remove resume_download * fix: _fetch_index_file call. * remove resume_download from docs.