  1. 13 Aug, 2025 1 commit
  2. 08 Aug, 2025 1 commit
  3. 15 Jul, 2025 1 commit
  4. 11 Jul, 2025 1 commit
  5. 02 Jul, 2025 1 commit
  6. 01 Jul, 2025 1 commit
  7. 14 Jun, 2025 1 commit
    • Chroma Pipeline (#11698) · 8adc6003
      Edna authored

      * working state from hameerabbasi and iddl
      
      * working state from hameerabbasi and iddl (transformer)
      
      * working state (normalization)
      
      * working state (embeddings)
      
      * add chroma loader
      
      * add chroma to mappings
      
      * add chroma to transformer init
      
      * take out variant stuff
      
      * get decently far in changing variant stuff
      
      * add chroma init
      
      * make chroma output class
      
      * add chroma transformer to dummy tp
      
      * add chroma to init
      
      * add chroma to init
      
      * fix single file
      
      * update
      
      * update
      
      * add chroma to auto pipeline
      
      * add chroma to pipeline init
      
      * change to chroma transformer
      
      * take out variant from blocks
      
      * swap embedder location
      
      * remove prompt_2
      
      * work on swapping text encoders
      
      * remove mask function
      
      * dont modify mask (for now)
      
      * wrap attn mask
      
      * no attn mask (can't get it to work)
      
      * remove pooled prompt embeds
      
      * change to my own unpooled embedder
      
      * fix load
      
      * take pooled projections out of transformer
      
      * ensure correct dtype for chroma embeddings
      
      * update
      
      * use dn6 attn mask + fix true_cfg_scale
      
      * use chroma pipeline output
      
      * use DN6 embeddings
      
      * remove guidance
      
      * remove guidance embed (pipeline)
      
      * remove guidance from embeddings
      
      * don't return length
      
      * dont change dtype
      
      * remove unused stuff, fix up docs
      
      * add chroma autodoc
      
      * add .md (oops)
      
      * initial chroma docs
      
      * undo don't change dtype
      
      * undo arxiv change
      
      unsure why that happened
      
      * fix hf papers regression in more places
      
      * Update docs/source/en/api/pipelines/chroma.md
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * do_cfg -> self.do_classifier_free_guidance
      
      * Update docs/source/en/api/models/chroma_transformer.md
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update chroma.md
      
      * Move chroma layers into transformer
      
      * Remove pruned AdaLayerNorms
      
      * Add chroma fast tests
      
      * (untested) batch cond and uncond
      
      * Add # Copied from for shift
      
      * Update # Copied from statements
      
      * update norm imports
      
      * Revert cond + uncond batching
      
      * Add transformer tests
      
      * move chroma test (oops)
      
      * chroma init
      
      * fix chroma pipeline fast tests
      
      * Update src/diffusers/models/transformers/transformer_chroma.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Move Approximator and Embeddings
      
      * Fix auto pipeline + make style, quality
      
      * make style
      
      * Apply style fixes
      
      * switch to new input ids
      
      * fix # Copied from error
      
      * remove # Copied from on protected members
      
      * try to fix import
      
      * fix import
      
      * make fix-copies
      
      * revert style fix
      
      * update chroma transformer params
      
      * update chroma transformer approximator init params
      
      * update to pad tokens
      
      * fix batch inference
      
      * Make more pipeline tests work
      
      * Make most transformer tests work
      
      * fix docs
      
      * make style, make quality
      
      * skip batch tests
      
      * fix test skipping
      
      * fix test skipping again
      
      * fix for tests
      
      * Fix all pipeline test
      
      * update
      
      * push local changes, fix docs
      
      * add encoder test, remove pooled dim
      
      * default proj dim
      
      * fix tests
      
      * fix equal size list input
      
      * Revert "fix equal size list input"
      
      This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c.
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
      8adc6003
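The squash log above mentions fixing `true_cfg_scale`. As a generic illustration of what a guidance scale does in diffusion pipelines (not Chroma's actual implementation), classifier-free guidance steers the unconditional noise prediction toward the conditional one:

```python
def cfg_combine(noise_uncond, noise_cond, guidance_scale):
    # Standard classifier-free guidance combine: move the unconditional
    # prediction toward the conditional one, scaled by guidance_scale.
    return [u + guidance_scale * (c - u) for u, c in zip(noise_uncond, noise_cond)]
```

With `guidance_scale=1.0` this reduces to the conditional prediction; larger values amplify the prompt's influence.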
  8. 19 May, 2025 1 commit
  9. 15 May, 2025 1 commit
  10. 01 May, 2025 1 commit
  11. 16 Apr, 2025 1 commit
  12. 10 Apr, 2025 1 commit
  13. 04 Apr, 2025 2 commits
  14. 10 Mar, 2025 1 commit
  15. 07 Mar, 2025 1 commit
  16. 06 Mar, 2025 1 commit
  17. 03 Mar, 2025 1 commit
    • Fix SD2.X clip single file load projection_dim (#10770) · 9e910c46
      Teriks authored

      * Fix SD2.X clip single file load projection_dim
      
      Infer projection_dim from the checkpoint before loading
      from pretrained, override any incorrect hub config.
      
      Hub configuration for SD2.X specifies projection_dim=512
      which is incorrect for SD2.X checkpoints loaded from civitai
      and similar.
      
      Exception was previously thrown upon attempting to
      load_model_dict_into_meta for SD2.X single file checkpoints.
      
      Such LDM models usually require projection_dim=1024
      
      * convert_open_clip_checkpoint use hidden_size for text_proj_dim
      
      * convert_open_clip_checkpoint, revert checkpoint[text_proj_key].shape[1] -> [0]
      
      values are identical
      
      ---------
      Co-authored-by: Teriks <Teriks@users.noreply.github.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      9e910c46
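The fix described in this commit boils down to trusting the checkpoint's own tensor shapes over the hub config. A minimal sketch, with an illustrative key name and a stand-in tensor class (not diffusers' actual code):

```python
class FakeTensor:
    """Stand-in for a checkpoint tensor; only .shape matters for this sketch."""
    def __init__(self, shape):
        self.shape = shape

def infer_projection_dim(checkpoint, text_proj_key):
    # Prefer the shape actually stored in the checkpoint over any hub config
    # value; per the commit, both dims of the text projection are identical,
    # so shape[0] and shape[1] give the same answer.
    if text_proj_key in checkpoint:
        return checkpoint[text_proj_key].shape[0]
    return None
```

For an SD2.X checkpoint this would yield 1024 rather than the 512 hard-coded in some hub configs.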
  18. 19 Feb, 2025 1 commit
  19. 12 Feb, 2025 1 commit
  20. 21 Jan, 2025 1 commit
  21. 13 Jan, 2025 2 commits
  22. 10 Jan, 2025 1 commit
    • Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305) · 52c05bd4
      Daniel Hipke authored
      
      * Add no_mmap arg.
      
      * Fix arg parsing.
      
      * Update another method to force no mmap.
      
      * logging
      
      * logging2
      
      * propagate no_mmap
      
      * logging3
      
      * propagate no_mmap
      
      * logging4
      
      * fix open call
      
      * clean up logging
      
      * cleanup
      
      * fix missing arg
      
      * update logging and comments
      
      * Rename to disable_mmap and update other references.
      
      * [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
      
      Update ltx_video.md to remove generator from `from_pretrained()`
      
      * docs: fix a mistake in docstring (#10319)
      
      Update pipeline_hunyuan_video.py
      
      docs: fix a mistake
      
      * [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in function prepare_latents caused by audio_vae_length (#10306)
      
      [BUG FIX] [Stable Audio Pipeline] TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints,but got float"
      
      torch.Tensor.new_zeros() takes a single argument size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
      
      in function prepare_latents:
      audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length
      audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
      ...
      audio = initial_audio_waveforms.new_zeros(audio_shape)
      
      audio_vae_length evaluates to float because self.transformer.config.sample_size returns a float
      Co-authored-by: hlky <hlky@hlky.ac>
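The Stable Audio fix quoted above amounts to an integer cast before the shape tuple is built, since `new_zeros()` rejects float dimensions. A plain-Python sketch of the idea (the values are illustrative, not the real config):

```python
# Config values deserialized from JSON can come back as floats, so the
# product must be cast to int before it is used as a tensor dimension.
sample_size = 1024.0   # illustrative; the real value comes from transformer.config
hop_length = 2048
audio_vae_length = int(sample_size * hop_length)   # the cast is the fix
audio_shape = (2, 2, audio_vae_length)             # now a tuple of ints
```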
      
      * [docs] Fix quantization links (#10323)
      
      Update overview.md
      
      * [Sana]add 2K related model for Sana (#10322)
      
      add 2K related model for Sana
      
      * Update src/diffusers/loaders/single_file_model.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update src/diffusers/loaders/single_file.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * make style
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Leojc <liao_junchao@outlook.com>
      Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      52c05bd4
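What `disable_mmap` buys, in sketch form: read the whole file into memory in one sequential pass instead of memory-mapping it, so a network mount serves one large read rather than many small page-faulted ones. This is an illustrative reimplementation of the idea, not diffusers' actual loader code:

```python
import io
import mmap

def load_checkpoint_bytes(path, disable_mmap=False):
    # With disable_mmap=True the file is slurped into memory up front;
    # otherwise it is memory-mapped and pages are faulted in on demand.
    with open(path, "rb") as f:
        if disable_mmap:
            return io.BytesIO(f.read())
        return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
```

Both branches expose the same bytes; only the access pattern against the underlying filesystem differs.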
  23. 08 Jan, 2025 1 commit
  24. 06 Jan, 2025 1 commit
  25. 23 Dec, 2024 3 commits
  26. 20 Dec, 2024 1 commit
  27. 19 Dec, 2024 1 commit
  28. 18 Dec, 2024 1 commit
  29. 17 Dec, 2024 1 commit
  30. 12 Dec, 2024 1 commit
    • [core] LTX Video (#10021) · 96c376a5
      Aryan authored

      * transformer
      
      * make style & make fix-copies
      
      * transformer
      
      * add transformer tests
      
      * 80% vae
      
      * make style
      
      * make fix-copies
      
      * fix
      
      * undo cogvideox changes
      
      * update
      
      * update
      
      * match vae
      
      * add docs
      
      * t2v pipeline working; scheduler needs to be checked
      
      * docs
      
      * add pipeline test
      
      * update
      
      * update
      
      * make fix-copies
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update
      
      * copy t2v to i2v pipeline
      
      * update
      
      * apply review suggestions
      
      * update
      
      * make style
      
      * remove framewise encoding/decoding
      
      * pack/unpack latents
      
      * image2video
      
      * update
      
      * make fix-copies
      
      * update
      
      * update
      
      * rope scale fix
      
      * debug layerwise code
      
      * remove debug
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * propagate precision changes to i2v pipeline
      
      * remove downcast
      
      * address review comments
      
      * fix comment
      
      * address review comments
      
      * [Single File] LTX support for loading original weights (#10135)
      
      * from original file mixin for ltx
      
      * undo config mapping fn changes
      
      * update
      
      * add single file to pipelines
      
      * update docs
      
      * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
      
      * Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
      
      * rename classes based on ltx review
      
      * point to original repository for inference
      
      * make style
      
      * resolve conflicts correctly
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      96c376a5
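The log above mentions packing and unpacking latents. As a hedged sketch of the shape arithmetic only (parameter names are assumptions, not LTX's actual API): packing flattens a video latent grid into a token sequence, with each token covering one spatiotemporal patch.

```python
def packed_sequence_length(num_frames, height, width, patch_size=1, patch_size_t=1):
    # Number of tokens after flattening a (frames, height, width) latent grid,
    # where each token covers a patch_size x patch_size spatial patch across
    # patch_size_t frames. Unpacking reverses this flattening.
    return (num_frames // patch_size_t) * (height // patch_size) * (width // patch_size)
```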
  31. 11 Dec, 2024 1 commit
  32. 02 Dec, 2024 1 commit
  33. 22 Nov, 2024 1 commit
  34. 22 Oct, 2024 1 commit
  35. 24 Sep, 2024 1 commit
  36. 30 Aug, 2024 1 commit