1. 22 Jan, 2025 1 commit
    • [core] Layerwise Upcasting (#10347) · beacaa55
      Aryan authored
      
      
      * update
      
      * update
      
      * make style
      
      * remove dynamo disable
      
      * add coauthor
      Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update
      
      * update
      
      * update
      
      * update mixin
      
      * add some basic tests
      
      * update
      
      * update
      
      * non_blocking
      
      * improvements
      
      * update
      
      * norm.* -> norm
      
      * apply suggestions from review
      
      * add example (a usage sketch follows this commit entry)
      
      * update hook implementation to the latest changes from pyramid attention broadcast
      
      * deinitialize should raise an error
      
      * update doc page
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update docs
      
      * update
      
      * refactor
      
      * fix _always_upcast_modules for asym ae and vq_model
      
      * fix lumina embedding forward to not depend on weight dtype
      
      * refactor tests
      
      * add simple lora inference tests
      
      * _always_upcast_modules -> _precision_sensitive_module_patterns
      
      * remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
      
      * check layer dtypes in lora test
      
      * fix UNet1DModelTests::test_layerwise_upcasting_inference
      
      * _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
      
      * skip test in NCSNppModelTests
      
      * skip tests for AutoencoderTinyTests
      
      * skip tests for AutoencoderOobleckTests
      
      * skip tests for UNet1DModelTests - unsupported pytorch operations
      
      * layerwise_upcasting -> layerwise_casting
      
      * skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
      
      * add layerwise fp8 pipeline test
      
      * use xfail
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
      
      * add note about memory consumption on tesla CI runner for failing test
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
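
      A hedged usage sketch of the layerwise casting this PR adds, assuming the enable_layerwise_casting method on ModelMixin described above; the checkpoint and dtype choices are illustrative only. Weights are stored in fp8 and upcast to the compute dtype inside per-layer hooks, while modules matching _skip_layerwise_casting_patterns (e.g. normalization layers) are left in full precision:

          import torch
          from diffusers import HunyuanVideoTransformer3DModel

          # Load any ModelMixin subclass in its usual compute dtype.
          # The repo id below is an illustrative assumption, not part of this PR.
          transformer = HunyuanVideoTransformer3DModel.from_pretrained(
              "hunyuanvideo-community/HunyuanVideo",
              subfolder="transformer",
              torch_dtype=torch.bfloat16,
          )

          # Keep weights in fp8 storage and upcast them to bfloat16 on the fly
          # in the per-layer hooks; argument names follow the commit messages
          # above (storage/compute dtypes, non_blocking copies) and may differ
          # slightly from the released API.
          transformer.enable_layerwise_casting(
              storage_dtype=torch.float8_e4m3fn,
              compute_dtype=torch.bfloat16,
              non_blocking=True,
          )
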
  2. 16 Dec, 2024 1 commit
    • [core] Hunyuan Video (#10136) · aace1f41
      Aryan authored
      
      
      * copy transformer
      
      * copy vae
      
      * copy pipeline
      
      * make fix-copies
      
      * refactor; make original code work with diffusers; test latents for comparison generated with this commit
      
      * move rope into pipeline; remove flash attention; refactor
      
      * begin conversion script
      
      * make style
      
      * refactor attention
      
      * refactor
      
      * refactor final layer
      
      * their mlp -> our feedforward
      
      * make style
      
      * add docs
      
      * refactor layer names
      
      * refactor modulation
      
      * cleanup
      
      * refactor norms
      
      * refactor activations
      
      * refactor single blocks attention
      
      * refactor attention processor
      
      * make style
      
      * cleanup a bit
      
      * refactor double transformer block attention
      
      * update mochi attn proc
      
      * use diffusers attention implementation in all modules; checkpoint for all values matching original
      
      * remove helper functions in vae
      
      * refactor upsample
      
      * refactor causal conv
      
      * refactor resnet
      
      * refactor
      
      * refactor
      
      * refactor
      
      * grad checkpointing
      
      * autoencoder test
      
      * fix scaling factor
      
      * refactor clip
      
      * refactor llama text encoding
      
      * add coauthor
      Co-Authored-By: "Gregory D. Hunkins" <greg@ollano.com>
      
      * refactor rope; diff: 0.14990234375; reason and fix: create rope grid on cpu and move to device
      
      Note: The following line diverges from the original behaviour. We create the grid on the device, whereas
      the original implementation creates it on CPU and then moves it to the device. This results in numerical
      differences in layerwise debugging outputs, but the output is visually the same. A short sketch of this
      difference follows this commit entry.
      
      * use diffusers timesteps embedding; diff: 0.10205078125
      
      * rename
      
      * convert
      
      * update
      
      * add tests for transformer
      
      * add pipeline tests; text encoder 2 is not optional
      
      * fix attention implementation for torch
      
      * add example (a usage sketch follows this commit entry)
      
      * update docs
      
      * update docs
      
      * apply suggestions from review
      
      * refactor vae
      
      * update
      
      * Apply suggestions from code review
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * Update src/diffusers/pipelines/hunyuan_video/pipeline_hunyuan_video.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * make fix-copies
      
      * update
      
      ---------
      Co-authored-by: "Gregory D. Hunkins" <greg@ollano.com>
      Co-authored-by: hlky <hlky@hlky.ac>
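
      A minimal sketch of the device-placement note above, using arbitrary illustrative constants rather than the model's real RoPE parameters:

          import torch

          # The same RoPE-style frequency grid, computed directly on the GPU
          # versus computed on CPU and then moved. The float ops below may hit
          # different kernels on each backend, so the results can differ by a
          # tiny amount even though decoded outputs look identical.
          def rope_freqs(device):
              dim, length, theta = 64, 256, 10000.0
              inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float32, device=device) / dim))
              t = torch.arange(length, dtype=torch.float32, device=device)
              freqs = torch.outer(t, inv_freq)  # (length, dim // 2)
              return torch.cos(freqs), torch.sin(freqs)

          if torch.cuda.is_available():
              cos_gpu, _ = rope_freqs("cuda")
              cos_cpu, _ = rope_freqs("cpu")
              # Usually a small but nonzero value, mirroring the layerwise
              # diffs reported in the commit messages above.
              print((cos_gpu - cos_cpu.to("cuda")).abs().max())
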
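      A short usage sketch for the pipeline this commit adds; the repo id, resolution, and dtype choices are illustrative assumptions, while the class names follow the files touched above:

          import torch
          from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
          from diffusers.utils import export_to_video

          model_id = "hunyuanvideo-community/HunyuanVideo"  # assumed repo id, for illustration
          transformer = HunyuanVideoTransformer3DModel.from_pretrained(
              model_id, subfolder="transformer", torch_dtype=torch.bfloat16
          )
          pipe = HunyuanVideoPipeline.from_pretrained(
              model_id, transformer=transformer, torch_dtype=torch.float16
          ).to("cuda")

          # Generate a short clip and write it out as an mp4.
          video = pipe(
              prompt="A cat walks on the grass, realistic style.",
              height=320,
              width=512,
              num_frames=61,
              num_inference_steps=30,
          ).frames[0]
          export_to_video(video, "output.mp4", fps=15)
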