1. 22 Jan, 2025 1 commit
    • [core] Layerwise Upcasting (#10347) · beacaa55
      Aryan authored

      * update
      
      * update
      
      * make style
      
      * remove dynamo disable
      
      * add coauthor
      Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update
      
      * update
      
      * update
      
      * update mixin
      
      * add some basic tests
      
      * update
      
      * update
      
      * non_blocking
      
      * improvements
      
      * update
      
      * norm.* -> norm
      
      * apply suggestions from review
      
      * add example
      
      * update hook implementation to the latest changes from pyramid attention broadcast
      
      * deinitialize should raise an error
      
      * update doc page
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update docs
      
      * update
      
      * refactor
      
      * fix _always_upcast_modules for asym ae and vq_model
      
      * fix lumina embedding forward to not depend on weight dtype
      
      * refactor tests
      
      * add simple lora inference tests
      
      * _always_upcast_modules -> _precision_sensitive_module_patterns
      
      * remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
      
      * check layer dtypes in lora test
      
      * fix UNet1DModelTests::test_layerwise_upcasting_inference
      
      * _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
      
      * skip test in NCSNppModelTests
      
      * skip tests for AutoencoderTinyTests
      
      * skip tests for AutoencoderOobleckTests
      
      * skip tests for UNet1DModelTests - unsupported pytorch operations
      
      * layerwise_upcasting -> layerwise_casting
      
      * skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
      
      * add layerwise fp8 pipeline test
      
      * use xfail
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
      
      * add note about memory consumption on tesla CI runner for failing test
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
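
      The feature this PR landed stores a model's weights in a low-precision
      dtype and upcasts each layer to the compute dtype just before its
      forward pass. A minimal usage sketch (the checkpoint name is
      illustrative; the method is the ModelMixin API added here):

      import torch
      from diffusers import FluxPipeline

      pipe = FluxPipeline.from_pretrained(
          "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
      ).to("cuda")
      # Keep weights in fp8 storage; upcast to bf16 per layer during forward.
      # Modules matching _skip_layerwise_casting_patterns (e.g. norm layers,
      # per the renames above) are left in their original precision.
      pipe.transformer.enable_layerwise_casting(
          storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
      )
      image = pipe("a photo of a cat", num_inference_steps=4).images[0]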
  2. 12 Jan, 2025 1 commit
  3. 10 Jan, 2025 1 commit
    • [LoRA] allow big CUDA tests to run properly for LoRA (and others) (#9845) · a6f043a8
      Sayak Paul authored

      * allow big lora tests to run on the CI.
      
      * print
      
      * print.
      
      * print
      
      * print
      
      * print
      
      * print
      
      * more
      
      * print
      
      * remove print.
      
      * remove print
      
      * directly place on cuda.
      
      * remove pipeline.
      
      * remove
      
      * fix
      
      * fix
      
      * spaces
      
      * quality
      
      * updates
      
      * directly place flux controlnet pipeline on cuda.
      
      * torch_device instead of cuda.
      
      * style
      
      * device placement.
      
      * fixes
      
      * add big gpu marker for mochi; rename test correctly
      
      * address feedback
      
      * fix
      
      ---------
      Co-authored-by: Aryan <aryan@huggingface.co>
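
      The device-placement pattern these commits converge on: pipelines go on
      torch_device rather than a hard-coded "cuda", and heavyweight runs sit
      behind a big-GPU marker. A sketch under those assumptions (the test
      class and checkpoint are illustrative):

      import unittest

      import pytest
      import torch
      from diffusers import FluxPipeline
      from diffusers.utils.testing_utils import (
          require_big_gpu_with_torch_cuda,
          torch_device,
      )

      @require_big_gpu_with_torch_cuda
      @pytest.mark.big_gpu_with_torch_cuda
      class FluxLoRAIntegrationTests(unittest.TestCase):
          def test_lora_inference(self):
              pipe = FluxPipeline.from_pretrained(
                  "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
              )
              # torch_device, not "cuda", so the test is portable across
              # accelerators.
              pipe.to(torch_device)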
  4. 21 Dec, 2024 1 commit
    • Support Flux IP Adapter (#10261) · be207099
      hlky authored

      * Flux IP-Adapter
      
      * test cfg
      
      * make style
      
      * temp remove copied from
      
      * fix test
      
      * fix test
      
      * v2
      
      * fix
      
      * make style
      
      * temp remove copied from
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Move encoder_hid_proj to inside FluxTransformer2DModel
      
      * merge
      
      * separate encode_prompt, add copied from, image_encoder offload
      
      * make
      
      * fix test
      
      * fix
      
      * Update src/diffusers/pipelines/flux/pipeline_flux.py
      
      * test_flux_prompt_embeds change not needed
      
      * true_cfg -> true_cfg_scale
      
      * fix merge conflict
      
      * test_flux_ip_adapter_inference
      
      * add fast test
      
      * FluxIPAdapterMixin not test mixin
      
      * Update pipeline_flux.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
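
      As wired up in this PR, the IP-Adapter image projection lives inside
      FluxTransformer2DModel (encoder_hid_proj) and guidance is requested via
      true_cfg_scale. A rough usage sketch (adapter repo, weight file, and
      image-encoder checkpoint are assumptions):

      import torch
      from diffusers import FluxPipeline
      from diffusers.utils import load_image

      pipe = FluxPipeline.from_pretrained(
          "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
      ).to("cuda")
      pipe.load_ip_adapter(
          "XLabs-AI/flux-ip-adapter",  # assumed adapter repo
          weight_name="ip_adapter.safetensors",
          image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
      )
      pipe.set_ip_adapter_scale(1.0)

      ref = load_image("reference.png")  # placeholder reference image
      image = pipe(
          prompt="a statue in a garden",
          negative_prompt="blurry",
          ip_adapter_image=ref,
          true_cfg_scale=4.0,  # renamed from true_cfg in this PR
      ).images[0]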
  5. 18 Dec, 2024 1 commit
  6. 23 Nov, 2024 1 commit
  7. 20 Nov, 2024 1 commit
  8. 31 Oct, 2024 3 commits
  9. 11 Sep, 2024 1 commit
  10. 04 Sep, 2024 1 commit
  11. 02 Sep, 2024 1 commit
  12. 23 Aug, 2024 1 commit
  13. 02 Aug, 2024 1 commit
    • [Flux] allow tests to run (#9050) · 0e460675
      Sayak Paul authored
      * fix tests
      
      * fix
      
      * float64 skip
      
      * remove sample_size.
      
      * remove
      
      * remove more
      
      * default_sample_size.
      
      * credit Black Forest Labs for the Flux model.
      
      * skip
      
      * fix: tests
      
      * remove OriginalModelMixin
      
      * add transformer model test
      
      * add: transformer model tests
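
      The transformer model tests added at the end of this series build a
      deliberately tiny model so the forward pass can be smoke-tested
      quickly. A sketch of that pattern (all dimensions are made-up
      small-test values, not the real model's; axes_dims_rope must sum to
      attention_head_dim):

      import torch
      from diffusers import FluxTransformer2DModel

      torch.manual_seed(0)
      model = FluxTransformer2DModel(
          patch_size=1,
          in_channels=4,
          num_layers=1,
          num_single_layers=1,
          attention_head_dim=16,
          num_attention_heads=2,
          joint_attention_dim=32,
          pooled_projection_dim=32,
          axes_dims_rope=[4, 4, 8],
      )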
  14. 01 Aug, 2024 1 commit