"docs/vscode:/vscode.git/clone" did not exist on "9c095a725a8808f08d603f54fe7bd7a15374a522"
  1. 19 Feb, 2025 1 commit
  2. 15 Feb, 2025 2 commits
    • CogView4 (supports different length c and uc) (#10649) · d90cd362
      Yuxuan Zhang authored
      
      
      * init
      
      * encode with glm
      
      * draft schedule
      
      * feat(scheduler): Add CogView scheduler implementation
      
      * feat(embeddings): add CogView 2D rotary positional embedding
      
      * 1
      
      * Update pipeline_cogview4.py
      
      * fix the timestep init and sigma
      
      * update latent
      
      * draft patch (not working)
      
      * fix
      
      * [WIP][cogview4]: implement initial CogView4 pipeline
      
      Implement the basic CogView4 pipeline structure with the following changes:
      - Add CogView4 pipeline implementation
      - Implement DDIM scheduler for CogView4
      - Add CogView3Plus transformer architecture
      - Update embedding models
      
      Current limitations:
      - CFG implementation uses padding for sequence length alignment
      - Need to verify transformer inference alignment with Megatron
      
      TODO:
      - Consider separate forward passes for condition/uncondition
        instead of padding approach
      
      * [WIP][cogview4][refactor]: Split condition/uncondition forward pass in CogView4 pipeline
      
      Split the forward pass for conditional and unconditional predictions in the CogView4 pipeline to match the original implementation. The noise prediction is now done separately for each case before combining them for guidance. However, the results still need improvement.
      
      This is a work in progress as the generated images are not yet matching expected quality.
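      A minimal sketch of that split (names such as `transformer`, `prompt_embeds`, and `guidance_scale` are illustrative, not the pipeline's exact API):

      import torch

      def cfg_noise_pred(transformer, latents, t, prompt_embeds, negative_prompt_embeds, guidance_scale):
          # Two separate forward passes avoid padding cond/uncond to a shared sequence length.
          with torch.no_grad():
              noise_cond = transformer(hidden_states=latents, encoder_hidden_states=prompt_embeds, timestep=t).sample
              noise_uncond = transformer(hidden_states=latents, encoder_hidden_states=negative_prompt_embeds, timestep=t).sample
          # Standard classifier-free guidance combination.
          return noise_uncond + guidance_scale * (noise_cond - noise_uncond)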
      
      * use the -2 (penultimate) hidden state
      
      * remove text_projector
      
      * 1
      
      * [WIP] Add tensor-reload to align input from transformer block
      
      * [WIP] for older glm
      
      * run the CogView4 transformer forward twice, once for cond and once for uncond
      
      * Update convert_cogview4_to_diffusers.py
      
      * remove this
      
      * use main example
      
      * change back
      
      * reset
      
      * setback
      
      * back
      
      * back 4
      
      * Fix qkv conversion logic for CogView4 to Diffusers format
      
      * back5
      
      * revert to the SAT-to-CogView4 conversion version
      
      * add a new conversion script from Megatron
      
      * [WIP][cogview4]: implement CogView4 attention processor
      
      Add CogView4AttnProcessor class for implementing scaled dot-product attention
      with rotary embeddings for the CogView4 model. This processor concatenates
      encoder and hidden states, applies QKV projections and RoPE, but does not
      include spatial normalization.
      
      TODO:
      - Fix incorrect QKV projection weights
      - Resolve ~25% error in RoPE implementation compared to Megatron
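      In rough sketch form (not the actual diffusers code; `apply_rotary_emb` is the diffusers helper this PR later adopts):

      import torch
      import torch.nn.functional as F
      from diffusers.models.embeddings import apply_rotary_emb

      class SketchAttnProcessor:
          # Joint text+image attention: concatenate encoder and hidden states,
          # project QKV, apply RoPE to the image tokens only, then SDPA.
          def __call__(self, attn, hidden_states, encoder_hidden_states, image_rotary_emb=None):
              text_len = encoder_hidden_states.shape[1]
              hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1)
              batch, seq_len, _ = hidden_states.shape
              head_dim = attn.to_q.out_features // attn.heads

              def heads(x):  # (B, S, H*D) -> (B, H, S, D)
                  return x.view(batch, seq_len, attn.heads, head_dim).transpose(1, 2)

              query = heads(attn.to_q(hidden_states))
              key = heads(attn.to_k(hidden_states))
              value = heads(attn.to_v(hidden_states))
              if image_rotary_emb is not None:
                  # No rotary embedding on the text prefix.
                  query[:, :, text_len:] = apply_rotary_emb(query[:, :, text_len:], image_rotary_emb)
                  key[:, :, text_len:] = apply_rotary_emb(key[:, :, text_len:], image_rotary_emb)
              out = F.scaled_dot_product_attention(query, key, value)
              out = out.transpose(1, 2).reshape(batch, seq_len, -1)
              return out.split([text_len, seq_len - text_len], dim=1)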
      
      * [cogview4] implement CogView4 transformer block
      
      Implement CogView4 transformer block following the Megatron architecture:
      - Add multi-modulate and multi-gate mechanisms for adaptive layer normalization
      - Implement dual-stream attention with encoder-decoder structure
      - Add feed-forward network with GELU activation
      - Support rotary position embeddings for image tokens
      
      The implementation follows the original CogView4 architecture while adapting
      it to work within the diffusers framework.
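      In sketch form (layer names and shapes are illustrative; only the multi-modulate/multi-gate pattern is shown, on a single stream):

      import torch
      import torch.nn as nn

      class SketchBlock(nn.Module):
          def __init__(self, dim, num_heads):
              super().__init__()
              self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
              # One projection of the time embedding yields shift/scale/gate for both
              # the attention and feed-forward sub-layers ("multi-modulate / multi-gate").
              self.adaln = nn.Linear(dim, 6 * dim)
              self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
              self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
              self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

          def forward(self, x, temb):
              shift_a, scale_a, gate_a, shift_f, scale_f, gate_f = self.adaln(temb).chunk(6, dim=-1)
              h = self.norm1(x) * (1 + scale_a.unsqueeze(1)) + shift_a.unsqueeze(1)
              x = x + gate_a.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
              h = self.norm2(x) * (1 + scale_f.unsqueeze(1)) + shift_f.unsqueeze(1)
              return x + gate_f.unsqueeze(1) * self.ff(h)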
      
      * with new attn
      
      * [bugfix] fix dimension mismatch in CogView4 attention
      
      * [cogview4][WIP]: update final normalization in CogView4 transformer
      
      Refactored the final normalization layer in CogView4 transformer to use separate layernorm and AdaLN operations instead of combined AdaLayerNormContinuous. This matches the original implementation but needs validation.
      
      Needs verification against reference implementation.
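      A sketch of the refactored final normalization (sizes illustrative): a plain LayerNorm followed by a separate shift/scale from the conditioning embedding, rather than a combined AdaLayerNormContinuous:

      import torch
      import torch.nn as nn

      dim, cond_dim = 64, 64  # illustrative sizes
      norm_final = nn.LayerNorm(dim, elementwise_affine=False)
      adaln_final = nn.Linear(cond_dim, 2 * dim)

      def final_norm(hidden_states, temb):
          shift, scale = adaln_final(temb).chunk(2, dim=-1)
          return norm_final(hidden_states) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)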
      
      * 1
      
      * put back
      
      * Update transformer_cogview4.py
      
      * change time_shift
      
      * Update pipeline_cogview4.py
      
      * change timesteps
      
      * fix
      
      * change text_encoder_id
      
      * [cogview4][rope] align RoPE implementation with Megatron
      
      - Implement apply_rope method in attention processor to match Megatron's implementation
      - Update position embeddings to ensure compatibility with Megatron-style rotary embeddings
      - Ensure consistent rotary position encoding across attention layers
      
      This change improves compatibility with Megatron-based models and provides
      better alignment with the original implementation's positional encoding approach.
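      A sketch of the Megatron-style rotation (the real apply_rope may lay out cos/sin differently): consecutive (even, odd) channel pairs are rotated by per-position angles:

      import torch

      def apply_rope(x, freqs):
          # x: (..., d); freqs: angles broadcastable to (..., d // 2).
          cos, sin = freqs.cos(), freqs.sin()
          x_even, x_odd = x[..., 0::2], x[..., 1::2]
          rotated = torch.stack((x_even * cos - x_odd * sin,
                                 x_even * sin + x_odd * cos), dim=-1)
          return rotated.flatten(-2)  # re-interleave back to (..., d)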
      
      * [cogview4][bugfix] apply silu activation to time embeddings in CogView4
      
      Applied silu activation to time embeddings before splitting into conditional
      and unconditional parts in CogView4Transformer2DModel. This matches the
      original implementation and helps ensure correct time conditioning behavior.
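      Roughly (illustrative shapes; `temb` stands for the pooled time embedding):

      import torch
      import torch.nn.functional as F

      temb = torch.randn(2, 512)           # cond and uncond stacked on the batch dim
      temb = F.silu(temb)                  # activate once, before the split
      temb_cond, temb_uncond = temb.chunk(2, dim=0)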
      
      * [cogview4][chore] clean up pipeline code
      
      - Remove commented out code and debug statements
      - Remove unused retrieve_timesteps function
      - Clean up code formatting and documentation
      
      This commit focuses on code cleanup in the CogView4 pipeline implementation, removing unnecessary commented code and improving readability without changing functionality.
      
      * [cogview4][scheduler] Implement CogView4 scheduler and pipeline
      
      * now it works
      
      * add timestep
      
      * batch
      
      * change convert script
      
      * refactor pt. 1; make style
      
      * refactor pt. 2
      
      * refactor pt. 3
      
      * add tests
      
      * make fix-copies
      
      * update toctree.yml
      
      * use flow match scheduler instead of custom
      
      * remove scheduling_cogview.py
      
      * add tiktoken to test dependencies
      
      * Update src/diffusers/models/embeddings.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * apply suggestions from review
      
      * use diffusers apply_rotary_emb
      
      * update flow match scheduler to accept timesteps
      
      * fix comment
      
      * apply review suggestions
      
      * Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      ---------
      Co-authored-by: 三洋三洋 <1258009915@qq.com>
      Co-authored-by: OleehyO <leehy0357@gmail.com>
      Co-authored-by: Aryan <aryan@huggingface.co>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      d90cd362
    • follow-up refactor on lumina2 (#10776) · 69f919d8
      YiYi Xu authored
      * up
      69f919d8
  3. 14 Feb, 2025 1 commit
    • Module Group Offloading (#10503) · 9a147b82
      Aryan authored
      
      
      * update
      
      * fix
      
      * non_blocking; handle parameters and buffers
      
      * update
      
      * Group offloading with cuda stream prefetching (#10516)
      
      * cuda stream prefetch
      
      * remove breakpoints
      
      * update
      
      * copy model hook implementation from pab
      
      * update; ~very workaround-based implementation, but it seems to work as expected; needs cleanup and rewrite
      
      * more workarounds to make it actually work
      
      * cleanup
      
      * rewrite
      
      * update
      
      * make sure to sync current stream before overwriting with pinned params
      
      not doing so will lead to erroneous computations on the GPU and cause bad results
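      The pattern, in sketch form (names illustrative): make the transfer stream wait on the compute stream before overwriting device tensors with the pinned-host copies:

      import torch

      def prefetch_group(params, pinned_cpu_weights, transfer_stream):
          # If the compute stream is still reading the old device tensors,
          # overwriting them now would corrupt in-flight kernels.
          transfer_stream.wait_stream(torch.cuda.current_stream())
          with torch.cuda.stream(transfer_stream):
              for param, pinned in zip(params, pinned_cpu_weights):
                  # Pinned host memory makes the H2D copy truly asynchronous.
                  param.data = pinned.to("cuda", non_blocking=True)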
      
      * better check
      
      * update
      
      * remove hook implementation to not deal with merge conflict
      
      * re-add hook changes
      
      * why use more memory when less memory do trick
      
      * why still use slightly more memory when less memory do trick
      
      * optimise
      
      * add model tests
      
      * add pipeline tests
      
      * update docs
      
      * add layernorm and groupnorm
      
      * address review comments
      
      * improve tests; add docs
      
      * improve docs
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * apply suggestions from code review
      
      * update tests
      
      * apply suggestions from review
      
      * enable_group_offloading -> enable_group_offload for naming consistency
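      Usage after the rename looks roughly like this (argument names as introduced by this PR; they may evolve):

      import torch
      from diffusers import CogVideoXTransformer3DModel

      model = CogVideoXTransformer3DModel.from_pretrained(
          "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
      )
      model.enable_group_offload(
          onload_device=torch.device("cuda"),
          offload_device=torch.device("cpu"),
          offload_type="block_level",   # offload groups of blocks rather than leaf modules
          num_blocks_per_group=2,
          use_stream=True,              # overlap transfers with compute via CUDA streams
      )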
      
      * raise errors if multiple offloading strategies used; add relevant tests
      
      * handle .to() when group offload applied
      
      * refactor some repeated code
      
      * remove unintentional change from merge conflict
      
      * handle .cuda()
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      9a147b82
  4. 13 Feb, 2025 2 commits
  5. 12 Feb, 2025 3 commits
  6. 11 Feb, 2025 3 commits
  7. 10 Feb, 2025 1 commit
  8. 31 Jan, 2025 1 commit
  9. 28 Jan, 2025 2 commits
  10. 27 Jan, 2025 1 commit
  11. 24 Jan, 2025 1 commit
  12. 22 Jan, 2025 1 commit
    • [core] Layerwise Upcasting (#10347) · beacaa55
      Aryan authored
      
      
      * update
      
      * update
      
      * make style
      
      * remove dynamo disable
      
      * add coauthor
      Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update
      
      * update
      
      * update
      
      * update mixin
      
      * add some basic tests
      
      * update
      
      * update
      
      * non_blocking
      
      * improvements
      
      * update
      
      * norm.* -> norm
      
      * apply suggestions from review
      
      * add example
      
      * update hook implementation to the latest changes from pyramid attention broadcast
      
      * deinitialize should raise an error
      
      * update doc page
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update docs
      
      * update
      
      * refactor
      
      * fix _always_upcast_modules for asym ae and vq_model
      
      * fix lumina embedding forward to not depend on weight dtype
      
      * refactor tests
      
      * add simple lora inference tests
      
      * _always_upcast_modules -> _precision_sensitive_module_patterns
      
      * remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
      
      * check layer dtypes in lora test
      
      * fix UNet1DModelTests::test_layerwise_upcasting_inference
      
      * _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
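      The idea behind the skip patterns, sketched (this is not the diffusers implementation): cast everything to the storage dtype except name-matched precision-sensitive layers, and upcast around each forward:

      import re
      import torch
      import torch.nn as nn

      def layerwise_casting(model: nn.Module, storage_dtype, compute_dtype, skip_patterns=("norm",)):
          def upcast(module, args):
              module.to(compute_dtype)   # hook returns None, so inputs pass through unchanged

          def downcast(module, args, output):
              module.to(storage_dtype)

          for name, module in model.named_modules():
              if next(module.parameters(recurse=False), None) is None:
                  continue  # nothing to cast on this module itself
              if any(re.search(p, name) for p in skip_patterns):
                  continue  # e.g. normalization layers stay in the compute dtype
              module.to(storage_dtype)  # e.g. torch.float8_e4m3fn (needs a recent torch)
              module.register_forward_pre_hook(upcast)
              module.register_forward_hook(downcast)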
      
      * skip test in NCSNppModelTests
      
      * skip tests for AutoencoderTinyTests
      
      * skip tests for AutoencoderOobleckTests
      
      * skip tests for UNet1DModelTests - unsupported pytorch operations
      
      * layerwise_upcasting -> layerwise_casting
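      With the final name, usage looks roughly like this (API as added in this PR; model id illustrative):

      import torch
      from diffusers import CogVideoXTransformer3DModel

      transformer = CogVideoXTransformer3DModel.from_pretrained(
          "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
      )
      # Store weights in fp8; upcast per layer to bf16 during forward.
      transformer.enable_layerwise_casting(
          storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
      )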
      
      * skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
      
      * add layerwise fp8 pipeline test
      
      * use xfail
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
      
      * add note about memory consumption on tesla CI runner for failing test
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      beacaa55
  13. 21 Jan, 2025 2 commits
  14. 20 Jan, 2025 1 commit
  15. 19 Jan, 2025 1 commit
  16. 16 Jan, 2025 4 commits
  17. 14 Jan, 2025 2 commits
  18. 13 Jan, 2025 1 commit
  19. 11 Jan, 2025 1 commit
    • [DC-AE] support tiling for DC-AE (#10510) · e7db062e
      Junyu Chen authored
      
      
      * autoencoder_dc tiling
      
      * add tiling and slicing support in SANA pipelines
      
      * create variables for padding length because the line becomes too long
      
      * add tiling and slicing support in pag SANA pipelines
      
      * revert changes to tile size
      
      * make style
      
      * add vae tiling test
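      Usage, roughly (model id illustrative): enable tiling/slicing on the DC-AE so large latents decode in chunks:

      import torch
      from diffusers import SanaPipeline

      pipe = SanaPipeline.from_pretrained(
          "Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.bfloat16
      )
      pipe.vae.enable_tiling()   # decode in overlapping spatial tiles
      pipe.vae.enable_slicing()  # decode the batch one sample at a time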
      
      ---------
      Co-authored-by: Aryan <aryan@huggingface.co>
      e7db062e
  20. 10 Jan, 2025 1 commit
    • Add a `disable_mmap` option to the `from_single_file` loader to improve load performance on network mounts (#10305) · 52c05bd4
      Daniel Hipke authored
      
      
      * Add no_mmap arg.
      
      * Fix arg parsing.
      
      * Update another method to force no mmap.
      
      * logging
      
      * logging2
      
      * propagate no_mmap
      
      * logging3
      
      * propagate no_mmap
      
      * logging4
      
      * fix open call
      
      * clean up logging
      
      * cleanup
      
      * fix missing arg
      
      * update logging and comments
      
      * Rename to disable_mmap and update other references.
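      Usage, roughly (checkpoint path illustrative): skip memory-mapping so the file is read into memory once, which can be much faster on network mounts:

      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_single_file(
          "/mnt/nfs/checkpoints/v1-5-pruned-emaonly.safetensors",  # illustrative path
          torch_dtype=torch.float16,
          disable_mmap=True,
      )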
      
      * [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
      
      Update ltx_video.md to remove generator from `from_pretrained()`
      
      * docs: fix a mistake in docstring (#10319)
      
      Update pipeline_hunyuan_video.py
      
      docs: fix a mistake
      
      * [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in function prepare_latents caused by audio_vae_length (#10306)
      
      [BUG FIX] [Stable Audio Pipeline] TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints,but got float"
      
      torch.Tensor.new_zeros() takes a single argument size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor.
      
      in function prepare_latents:
      audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length
      audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
      ...
      audio = initial_audio_waveforms.new_zeros(audio_shape)
      
      audio_vae_length evaluates to float because self.transformer.config.sample_size returns a float
      Co-authored-by: hlky <hlky@hlky.ac>
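      One way to express the fix (a sketch against the quoted snippet): cast the product to int before building the shape tuple:

      audio_vae_length = int(self.transformer.config.sample_size * self.vae.hop_length)
      audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
      audio = initial_audio_waveforms.new_zeros(audio_shape)  # sizes are now all ints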
      
      * [docs] Fix quantization links (#10323)
      
      Update overview.md
      
      * [Sana]add 2K related model for Sana (#10322)
      
      add 2K related model for Sana
      
      * Update src/diffusers/loaders/single_file_model.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update src/diffusers/loaders/single_file.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * make style
      
      ---------
      Co-authored-by: hlky <hlky@hlky.ac>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Leojc <liao_junchao@outlook.com>
      Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      52c05bd4
  21. 09 Jan, 2025 1 commit
  22. 08 Jan, 2025 4 commits
  23. 07 Jan, 2025 1 commit
  24. 06 Jan, 2025 2 commits
    • Regarding the RunwayML path for V1.5 did change to stable-diffusion-v1-5/[stable-diffusion-v1-5 / stable-diffusion-inpainting] (#10476) · 4f5e3e35
      Ameer Azam authored
      
      * Update pipeline_controlnet.py
      
      * Update pipeline_controlnet_img2img.py
      
      RunwayML repos were taken down, so change all references to stable-diffusion-v1-5/stable-diffusion-v1-5, as sketched below
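      In practice the replacement looks like this (both repo ids are the sd-legacy mirrors named in the title):

      from diffusers import StableDiffusionInpaintPipeline, StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5")
      inpaint = StableDiffusionInpaintPipeline.from_pretrained(
          "stable-diffusion-v1-5/stable-diffusion-inpainting"
      )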
      
      * Update pipeline_controlnet_inpaint.py
      
      * runwayml take-down make change to sd-legacy
      
      * runwayml take-down make change to sd-legacy
      
      * runwayml take-down make change to sd-legacy
      
      * runwayml take-down make change to sd-legacy
      
      * Update convert_blipdiffusion_to_diffusers.py
      
      style change
      4f5e3e35
    • Fix hunyuan video attention mask dim (#10454) · 7747b588
      Aryan authored
      
      
      * fix
      
      * add coauthor
      Co-Authored-By: Nerogar <nerogar@arcor.de>
      
      ---------
      Co-authored-by: Nerogar <nerogar@arcor.de>
      7747b588