"vscode:/vscode.git/clone" did not exist on "26941fa3777618dfc659e638e524b65f22dd32a6"
  1. 09 Apr, 2025 2 commits
  2. 02 Apr, 2025 2 commits
  3. 21 Mar, 2025 1 commit
    • [core] FasterCache (#10163) · 844221ae
      Aryan authored
      
      
      * init
      
      * update
      
      * update
      
      * update
      
      * make style
      
      * update
      
      * fix
      
      * make it work with guidance distilled models
      
      * update
      
      * make fix-copies
      
      * add tests
      
      * update
      
      * apply_faster_cache -> apply_fastercache
      
      * fix
      
      * reorder
      
      * update
      
      * refactor
      
      * update docs
      
      * add fastercache to CacheMixin
      
      * update tests
      
      * Apply suggestions from code review
      
      * make style
      
      * try to fix partial import error
      
      * Apply style fixes
      
      * raise warning
      
      * update
      
      ---------
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
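      A minimal usage sketch of the cache added here, going through `CacheMixin.enable_cache()` with a `FasterCacheConfig`; the pipeline checkpoint and the specific parameter values are illustrative, not prescribed by the PR:
      ```python
      import torch
      from diffusers import CogVideoXPipeline, FasterCacheConfig

      pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16).to("cuda")

      # Reuse attention outputs across denoising steps instead of recomputing them every step.
      config = FasterCacheConfig(
          spatial_attention_block_skip_range=2,
          current_timestep_callback=lambda: pipe.current_timestep,
      )
      pipe.transformer.enable_cache(config)

      video = pipe("A cat walking through a garden", num_frames=49).frames[0]
      ```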
  4. 20 Mar, 2025 1 commit
  5. 25 Feb, 2025 1 commit
  6. 20 Feb, 2025 1 commit
    • [tests] test `encode_prompt()` in isolation (#10438) · b2ca39c8
      Sayak Paul authored
      * poc encode_prompt() tests
      
      * fix
      
      * updates.
      
      * fixes
      
      * fixes
      
      * updates
      
      * updates
      
      * updates
      
      * revert
      
      * updates
      
      * updates
      
      * updates
      
      * updates
      
      * remove SDXLOptionalComponentsTesterMixin.
      
      * remove tests that directly leveraged encode_prompt() in some way or the other.
      
      * fix imports.
      
      * remove _save_load
      
      * fixes
      
      * fixes
      
      * fixes
      
      * fixes
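      The isolated tests exercise `encode_prompt()` directly rather than through a full pipeline call; a rough sketch of that pattern (model id and keyword names follow the common Stable Diffusion signature and may vary per pipeline class):
      ```python
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
      ).to("cuda")

      # Run only the text-encoding step; no UNet or VAE forward pass is involved.
      prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
          prompt="an astronaut riding a horse",
          device=pipe.device,
          num_images_per_prompt=1,
          do_classifier_free_guidance=True,
          negative_prompt="blurry",
      )
      # The precomputed embeddings can later be passed back via prompt_embeds= / negative_prompt_embeds=.
      ```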
  7. 14 Feb, 2025 1 commit
    • Module Group Offloading (#10503) · 9a147b82
      Aryan authored
      
      
      * update
      
      * fix
      
      * non_blocking; handle parameters and buffers
      
      * update
      
      * Group offloading with cuda stream prefetching (#10516)
      
      * cuda stream prefetch
      
      * remove breakpoints
      
      * update
      
      * copy model hook implementation from pab
      
      * update; ~very workaround based implementation but it seems to work as expected; needs cleanup and rewrite
      
      * more workarounds to make it actually work
      
      * cleanup
      
      * rewrite
      
      * update
      
      * make sure to sync current stream before overwriting with pinned params
      
      not doing so will lead to erroneous computations on the GPU and cause bad results
      
      * better check
      
      * update
      
      * remove hook implementation to not deal with merge conflict
      
      * re-add hook changes
      
      * why use more memory when less memory do trick
      
      * why still use slightly more memory when less memory do trick
      
      * optimise
      
      * add model tests
      
      * add pipeline tests
      
      * update docs
      
      * add layernorm and groupnorm
      
      * address review comments
      
      * improve tests; add docs
      
      * improve docs
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * apply suggestions from code review
      
      * update tests
      
      * apply suggestions from review
      
      * enable_group_offloading -> enable_group_offload for naming consistency
      
      * raise errors if multiple offloading strategies used; add relevant tests
      
      * handle .to() when group offload applied
      
      * refactor some repeated code
      
      * remove unintentional change from merge conflict
      
      * handle .cuda()
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
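      A short sketch of the API this PR settles on (`enable_group_offload`, with the stream-based prefetching from #10516); the checkpoint and device choices are illustrative:
      ```python
      import torch
      from diffusers import CogVideoXPipeline

      pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

      # Keep weights on the CPU and move groups of layers onto the GPU just before they run.
      pipe.transformer.enable_group_offload(
          onload_device=torch.device("cuda"),
          offload_device=torch.device("cpu"),
          offload_type="leaf_level",   # or "block_level" together with num_blocks_per_group=...
          use_stream=True,             # overlap transfers with compute via a CUDA stream
      )
      # The remaining components (pipe.vae, pipe.text_encoder, ...) would be offloaded the same
      # way or moved to the GPU manually before running the pipeline.
      ```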
  8. 27 Jan, 2025 1 commit
  9. 22 Jan, 2025 1 commit
    • [core] Layerwise Upcasting (#10347) · beacaa55
      Aryan authored
      
      
      * update
      
      * update
      
      * make style
      
      * remove dynamo disable
      
      * add coauthor
      Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update
      
      * update
      
      * update
      
      * update mixin
      
      * add some basic tests
      
      * update
      
      * update
      
      * non_blocking
      
      * improvements
      
      * update
      
      * norm.* -> norm
      
      * apply suggestions from review
      
      * add example
      
      * update hook implementation to the latest changes from pyramid attention broadcast
      
      * deinitialize should raise an error
      
      * update doc page
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update docs
      
      * update
      
      * refactor
      
      * fix _always_upcast_modules for asym ae and vq_model
      
      * fix lumina embedding forward to not depend on weight dtype
      
      * refactor tests
      
      * add simple lora inference tests
      
      * _always_upcast_modules -> _precision_sensitive_module_patterns
      
      * remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
      
      * check layer dtypes in lora test
      
      * fix UNet1DModelTests::test_layerwise_upcasting_inference
      
      * _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
      
      * skip test in NCSNppModelTests
      
      * skip tests for AutoencoderTinyTests
      
      * skip tests for AutoencoderOobleckTests
      
      * skip tests for UNet1DModelTests - unsupported pytorch operations
      
      * layerwise_upcasting -> layerwise_casting
      
      * skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
      
      * add layerwise fp8 pipeline test
      
      * use xfail
      
      * Apply suggestions from code review
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
      
      * add note about memory consumption on tesla CI runner for failing test
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
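      A minimal sketch of the resulting `enable_layerwise_casting()` API: weights are stored in fp8 and each layer is upcast to the compute dtype only while it runs; the checkpoint is illustrative:
      ```python
      import torch
      from diffusers import CogVideoXTransformer3DModel

      transformer = CogVideoXTransformer3DModel.from_pretrained(
          "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
      )

      # Store weights in float8 to cut memory; layers flagged by _skip_layerwise_casting_patterns
      # (e.g. norms, embeddings) keep their original precision.
      transformer.enable_layerwise_casting(
          storage_dtype=torch.float8_e4m3fn,
          compute_dtype=torch.bfloat16,
      )
      ```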
  10. 14 Jan, 2025 1 commit
    • [FEAT] DDUF format (#10037) · fbff43ac
      Marc Sun authored
      
      
      * load and save dduf archive
      
      * style
      
      * switch to zip uncompressed
      
      * updates
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * first draft
      
      * remove print
      
      * switch to dduf_file for consistency
      
      * switch to huggingface hub api
      
      * fix log
      
      * add a basic test
      
      * Update src/diffusers/configuration_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * Update src/diffusers/pipelines/pipeline_utils.py
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * fix
      
      * fix variant
      
      * change saving logic
      
      * DDUF - Load transformers components manually (#10171)
      
      * update hfh version
      
      * Load transformers components manually
      
      * load encoder from_pretrained with state_dict
      
      * working version with transformers and tokenizer !
      
      * add generation_config case
      
      * fix tests
      
      * remove saving for now
      
      * typing
      
      * need next version from transformers
      
      * Update src/diffusers/configuration_utils.py
      Co-authored-by: Lucain <lucain@huggingface.co>
      
      * check path correctly
      
      * Apply suggestions from code review
      Co-authored-by: Lucain <lucain@huggingface.co>
      
      * update
      
      * typing
      
      * remove check for subfolder
      
      * quality
      
      * revert setup changes
      
      * oops
      
      * more readable condition
      
      * add loading from the hub test
      
      * add basic docs.
      
      * Apply suggestions from code review
      Co-authored-by: Lucain <lucain@huggingface.co>
      
      * add example
      
      * add
      
      * make functions private
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * minor.
      
      * fixes
      
      * fix
      
      * change the precedence of parameterized.
      
      * error out when custom pipeline is passed with dduf_file.
      
      * updates
      
      * fix
      
      * updates
      
      * fixes
      
      * updates
      
      * fix xfail condition.
      
      * fix xfail
      
      * fixes
      
      * sharded checkpoint compat
      
      * add test for sharded checkpoint
      
      * add suggestions
      
      * Update src/diffusers/models/model_loading_utils.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * from suggestions
      
      * add class attributes to flag dduf tests
      
      * last one
      
      * fix logic
      
      * remove comment
      
      * revert changes
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Lucain <lucain@huggingface.co>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
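      Loading from a DDUF archive goes through the new `dduf_file` argument to `from_pretrained`; the repo and file names below are placeholders:
      ```python
      import torch
      from diffusers import DiffusionPipeline

      # A DDUF file bundles the whole pipeline (configs + weights) into one uncompressed ZIP archive.
      pipe = DiffusionPipeline.from_pretrained(
          "some-org/some-dduf-repo",   # hypothetical Hub repo containing a .dduf archive
          dduf_file="pipeline.dduf",   # hypothetical archive name inside that repo
          torch_dtype=torch.bfloat16,
      ).to("cuda")

      image = pipe("A photo of a cat").images[0]
      ```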
  11. 09 Jan, 2025 1 commit
  12. 21 Dec, 2024 1 commit
    • Support Flux IP Adapter (#10261) · be207099
      hlky authored
      
      
      * Flux IP-Adapter
      
      * test cfg
      
      * make style
      
      * temp remove copied from
      
      * fix test
      
      * fix test
      
      * v2
      
      * fix
      
      * make style
      
      * temp remove copied from
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Move encoder_hid_proj to inside FluxTransformer2DModel
      
      * merge
      
      * separate encode_prompt, add copied from, image_encoder offload
      
      * make
      
      * fix test
      
      * fix
      
      * Update src/diffusers/pipelines/flux/pipeline_flux.py
      
      * test_flux_prompt_embeds change not needed
      
      * true_cfg -> true_cfg_scale
      
      * fix merge conflict
      
      * test_flux_ip_adapter_inference
      
      * add fast test
      
      * FluxIPAdapterMixin not test mixin
      
      * Update pipeline_flux.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
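      A rough usage sketch based on the `FluxIPAdapterMixin` and `true_cfg_scale` naming from this PR; the adapter checkpoint, image-encoder id, and keyword names are from memory and should be treated as illustrative:
      ```python
      import torch
      from diffusers import FluxPipeline
      from diffusers.utils import load_image

      pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")

      # Attach an IP-Adapter so a reference image can steer generation alongside the text prompt.
      pipe.load_ip_adapter(
          "XLabs-AI/flux-ip-adapter",
          weight_name="ip_adapter.safetensors",
          image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
      )
      pipe.set_ip_adapter_scale(1.0)

      reference = load_image("reference.png")  # local reference image
      image = pipe(
          "a dog wearing the same sweater",
          ip_adapter_image=reference,
          negative_prompt="low quality",
          true_cfg_scale=4.0,  # true classifier-free guidance for the guidance-distilled model
      ).images[0]
      ```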
  13. 04 Dec, 2024 1 commit
    • [tests] refactor vae tests (#9808) · c1926cef
      Sayak Paul authored
      
      
      * add: autoencoderkl tests
      
      * autoencodertiny.
      
      * fix
      
      * asymmetric autoencoder.
      
      * more
      
      * integration tests for stable audio decoder.
      
      * consistency decoder vae tests
      
      * remove grad check from consistency decoder.
      
      * cog
      
      * bye test_models_vae.py
      
      * fix
      
      * fix
      
      * remove allegro
      
      * fixes
      
      * fixes
      
      * fixes
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  14. 22 Nov, 2024 1 commit
    • make `pipelines` tests device-agnostic (part1) (#9399) · 64b3e0f5
      Fanli Lin authored
      
      
      * enable on xpu
      
      * add 1 more
      
      * add one more
      
      * enable more
      
      * add 1 more
      
      * add more
      
      * enable 1
      
      * enable more cases
      
      * enable
      
      * enable
      
      * update comment
      
      * one more
      
      * enable 1
      
      * add more cases
      
      * enable xpu
      
      * add one more case
      
      * add more cases
      
      * add 1
      
      * add more
      
      * add more cases
      
      * add case
      
      * enable
      
      * add more
      
      * add more
      
      * add more
      
      * enable more
      
      * add more
      
      * update code
      
      * update test marker
      
      * add skip back
      
      * update comment
      
      * remove single files
      
      * remove
      
      * style
      
      * add
      
      * revert
      
      * reformat
      
      * update decorator
      
      * update
      
      * update
      
      * update
      
      * Update tests/pipelines/deepfloyd_if/test_if.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update src/diffusers/utils/testing_utils.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update tests/pipelines/animatediff/test_animatediff_controlnet.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update tests/pipelines/animatediff/test_animatediff.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update tests/pipelines/animatediff/test_animatediff_controlnet.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * update float16
      
      * no unittest.skip
      
      * update
      
      * apply style check
      
      * reapply format
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  15. 07 Nov, 2024 1 commit
    • [Core] introduce `controlnet` module (#8768) · ded3db16
      Sayak Paul authored
      
      
      * move vae flax module.
      
      * controlnet module.
      
      * prepare for PR.
      
      * revert a commit
      
      * gracefully deprecate controlnet deps.
      
      * fix
      
      * fix doc path
      
      * fix-copies
      
      * fix path
      
      * style
      
      * style
      
      * conflicts
      
      * fix
      
      * fix-copies
      
      * sparsectrl.
      
      * updates
      
      * fix
      
      * updates
      
      * updates
      
      * updates
      
      * fix
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
  16. 31 Oct, 2024 2 commits
  17. 29 Oct, 2024 1 commit
  18. 28 Sep, 2024 1 commit
    • [Core] fix variant-identification. (#9253) · 11542431
      Sayak Paul authored
      
      
      * fix variant-identification.
      
      * fix variant
      
      * fix sharded variant checkpoint loading.
      
      * Apply suggestions from code review
      
      * fixes.
      
      * more fixes.
      
      * remove print.
      
      * fixes
      
      * fixes
      
      * comments
      
      * fixes
      
      * apply suggestions.
      
      * hub_utils.py
      
      * fix test
      
      * updates
      
      * fixes
      
      * fixes
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * updates.
      
      * remove patch file.
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
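      For context, variant identification is what resolves `variant`-suffixed (and possibly sharded) weight files at load time; a small sketch, with the SDXL checkpoint used only as an example:
      ```python
      import torch
      from diffusers import DiffusionPipeline

      # variant="fp16" selects files such as diffusion_pytorch_model.fp16.safetensors,
      # including sharded checkpoints, which is the case this PR fixes.
      pipe = DiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",
          variant="fp16",
          torch_dtype=torch.float16,
      )
      ```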
  19. 03 Sep, 2024 1 commit
    • [tests] remove/speedup some low signal tests (#9285) · 24053832
      Aryan authored
      * remove 2 shapes from SDFunctionTesterMixin::test_vae_tiling
      
      * combine freeu enable/disable test to reduce many inference runs
      
      * remove low signal unet test for signature
      
      * remove low signal embeddings test
      
      * remove low signal progress bar test from PipelineTesterMixin
      
      * combine ip-adapter single and multi tests to save many inferences
      
      * fix broken tests
      
      * Update tests/pipelines/test_pipelines_common.py
      
      * Update tests/pipelines/test_pipelines_common.py
      
      * add progress bar tests
  20. 07 Aug, 2024 1 commit
    • [Kolors] Add PAG (#8934) · 39e1f7ea
      Álvaro Somoza authored
      
      
      * txt2img pag added
      
      * autopipe added, fixed case
      
      * style
      
      * apply suggestions
      
      * added fast tests, added todo tests
      
      * revert dummy objects for kolors
      
      * fix pag dummies
      
      * fix test imports
      
      * update pag tests
      
      * add kolor pag to docs
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
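      A brief sketch of enabling PAG for Kolors through the auto pipeline, assuming the `enable_pag=True` / `pag_scale` pattern used by the other PAG pipelines; the checkpoint id and values are illustrative:
      ```python
      import torch
      from diffusers import AutoPipelineForText2Image

      pipe = AutoPipelineForText2Image.from_pretrained(
          "Kwai-Kolors/Kolors-diffusers",
          enable_pag=True,   # routes to the PAG variant of the Kolors text-to-image pipeline
          torch_dtype=torch.float16,
      ).to("cuda")

      image = pipe(
          "a portrait of an astronaut, studio lighting",
          guidance_scale=5.5,
          pag_scale=1.5,     # strength of perturbed-attention guidance
      ).images[0]
      ```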
  21. 24 Jul, 2024 1 commit
    • [Core] fix QKV fusion for attention (#8829) · 50d21f7c
      Sayak Paul authored
      * start debugging the problem,
      
      * start
      
      * fix
      
      * fix
      
      * fix imports.
      
      * handle hunyuan
      
      * remove residuals.
      
      * add a check for making sure there's appropriate procs.
      
      * add more rigor to the tests.
      
      * fix test
      
      * remove redundant check
      
      * fix-copies
      
      * move check_qkv_fusion_matches_attn_procs_length and check_qkv_fusion_processors_exist.
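      QKV fusion merges the separate query/key/value projections into a single matmul; a minimal sketch of toggling it on a pipeline (checkpoint illustrative):
      ```python
      import torch
      from diffusers import StableDiffusionXLPipeline

      pipe = StableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
      ).to("cuda")

      # Fuse Q/K/V projections in the attention blocks; the checks added in this PR verify
      # that the attention processors actually support the fused path.
      pipe.fuse_qkv_projections()
      image = pipe("an astronaut riding a horse").images[0]
      pipe.unfuse_qkv_projections()
      ```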
  22. 20 Jul, 2024 1 commit
  23. 28 Jun, 2024 1 commit
  24. 08 May, 2024 1 commit
  25. 01 May, 2024 1 commit
  26. 24 Apr, 2024 1 commit
  27. 19 Apr, 2024 1 commit
  28. 16 Apr, 2024 1 commit
    • Fixing implementation of ControlNet-XS (#6772) · fda1531d
      UmerHA authored
      
      
      * CheckIn - created DownSubBlocks
      
      * Added extra channels, implemented subblock fwd
      
      * Fixed connection sizes
      
      * checkin
      
      * Removed iter, next in forward
      
      * Models for SD21 & SDXL run through
      
      * Added back pipelines, cleared up connections
      
      * Cleaned up connection creation
      
      * added debug logs
      
      * updated logs
      
      * logs: added input loading
      
      * Update umer_debug_logger.py
      
      * log: Loading hint
      
      * Update umer_debug_logger.py
      
      * added logs
      
      * Changed debug logging
      
      * debug: added more logs
      
      * Fixed num_norm_groups
      
      * Debug: Logging all of SDXL input
      
      * Update umer_debug_logger.py
      
      * debug: updated logs
      
      * checkin
      
      * Readded tests
      
      * Removed debug logs
      
      * Fixed Slow Tests
      
      * Added value checks | Updated model_cpu_offload_seq
      
      * accelerate-offloading works ; fast tests work
      
      * Made unet & addon explicit in controlnet
      
      * Updated slow tests
      
      * Added dtype/device to ControlNetXS
      
      * Filled in test model paths
      
      * Added image_encoder/feature_extractor to XL pipe
      
      * Fixed fast tests
      
      * Added comments and docstrings
      
      * Fixed copies
      
      * Added docs ; Updates slow tests
      
      * Moved changes to UNetMidBlock2DCrossAttn
      
      * tiny cleanups
      
      * Removed stray prints
      
      * Removed ip adapters + freeU
      
      - Removed ip adapters + freeU as they don't make sense for ControlNet-XS
      - Fixed imports of UNet components
      
      * Fixed test_save_load_float16
      
      * Make style, quality, fix-copies
      
      * Changed loading/saving API for ControlNetXS
      
      - Changed loading/saving API for ControlNetXS
      - other small fixes
      
      * Removed ControlNet-XS from research examples
      
      * Make style, quality, fix-copies
      
      * Small fixes
      
      - deleted ControlNetXSModel.init_original
      - added time_embedding_mix to StableDiffusionControlNetXSPipeline .from_pretrained / StableDiffusionXLControlNetXSPipeline.from_pretrained
      - fixed copy hints
      
      * checkin May 11 '23
      
      * CheckIn Mar 12 '24
      
      * Fixed tests for SD
      
      * Added tests for UNetControlNetXSModel
      
      * Fixed SDXL tests
      
      * cleanup
      
      * Delete Pipfile
      
      * CheckIn Mar 20
      
      Started replacing sub blocks  by `ControlNetXSCrossAttnDownBlock2D` and `ControlNetXSCrossAttnUplock2D`
      
      * check-in Mar 23
      
      * checkin 24 Mar
      
      * Created init for UNetCnxs and CnxsAddon
      
      * CheckIn
      
      * Made from_modules, from_unet and no_control work
      
      * make style,quality,fix-copies & small changes
      
      * Fixed freezing
      
      * Added gradient ckpt'ing; fixed tests
      
      * Fix slow tests(+compile) ; clear naming confusion
      
      * Don't create UNet in init ; removed class_emb
      
      * Incorporated review feedback
      
      - Deleted get_base_pipeline /  get_controlnet_addon for pipes
      - Pipes inherit from StableDiffusionXLPipeline
      - Made module dicts for cnxs-addon's down/mid/up classes
      - Added support for qkv fusion and freeU
      
      * Make style, quality, fix-copies
      
      * Implemented review feedback
      
      * Removed compatibility check for vae/ctrl embedding
      
      * make style, quality, fix-copies
      
      * Delete Pipfile
      
      * Integrated review feedback
      
      - Importing ControlNetConditioningEmbedding now
      - get_down/mid/up_block_addon now outside class
      - renamed `do_control` to `apply_control`
      
      * Reduced size of test tensors
      
      For this, added `norm_num_groups` as parameter everywhere
      
      * Renamed cnxs-`Addon` to cnxs-`Adapter`
      
      - `ControlNetXSAddon` -> `ControlNetXSAdapter`
      - `ControlNetXSAddonDownBlockComponents` -> `DownBlockControlNetXSAdapter`, and similarly for mid/up
      - `get_mid_block_addon` -> `get_mid_block_adapter`, and similarly for mid/up
      
      * Fixed save_pretrained/from_pretrained bug
      
      * Removed redundant code
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
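      A short usage sketch with the classes this PR lands on (`ControlNetXSAdapter`, `StableDiffusionXLControlNetXSPipeline`); the adapter checkpoint is a placeholder and the control image is assumed to be a precomputed Canny edge map:
      ```python
      import torch
      from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline
      from diffusers.utils import load_image

      # Hypothetical repo id for a canny-conditioned ControlNet-XS adapter.
      controlnet = ControlNetXSAdapter.from_pretrained(
          "some-org/controlnet-xs-sdxl-canny", torch_dtype=torch.float16
      )

      pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",
          controlnet=controlnet,
          torch_dtype=torch.float16,
      ).to("cuda")

      canny_image = load_image("canny_edges.png")  # precomputed edge map
      image = pipe("aerial view of a futuristic city at night", image=canny_image).images[0]
      ```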
  29. 09 Apr, 2024 1 commit
  30. 04 Apr, 2024 1 commit
    • Skip `test_freeu_enabled` on MPS (#7570) · 71f49a5d
      UmerHA authored
      * Skip `test_freeu_enabled ` on MPS
      
      * Small fixes
      
      - import skip_mps correctly
      - disable all instances of test_freeu_enabled
      
      * Empty commit to trigger tests
      
      * Empty commit to trigger CI
  31. 02 Apr, 2024 2 commits
    • [Tests] Speed up fast pipelines part II (#7521) · 2b04ec2f
      Sayak Paul authored
      
      
      * start printing the tensors.
      
      * print full throttle
      
      * set static slices for 7 tests.
      
      * remove printing.
      
      * flatten
      
      * disable test for controlnet
      
      * what happens when things are seeded properly?
      
      * set the right value
      
      * style./
      
      * make pia test fail to check things
      
      * print.
      
      * fix pia.
      
      * checking for animatediff.
      
      * fix: animatediff.
      
      * video synthesis
      
      * final piece.
      
      * style.
      
      * print guess.
      
      * fix: assertion for control guess.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
    • Fix FreeU tests (#7540) · 5d21d4a2
      Dhruv Nair authored
      update
  32. 01 Apr, 2024 3 commits
  33. 29 Mar, 2024 2 commits
    • [Tests] Speed up some fast pipeline tests (#7477) · fac76169
      Sayak Paul authored
      * speed up test_vae_slicing in animatediff
      
      * speed up test_karras_schedulers_shape for attend and excite.
      
      * style.
      
      * get the static slices out.
      
      * specify torch print options.
      
      * modify
      
      * test run with controlnet
      
      * specify kwarg
      
      * fix: things
      
      * not None
      
      * flatten
      
      * controlnet img2img
      
      * complete controlet sd
      
      * finish more
      
      * finish more
      
      * finish more
      
      * finish more
      
      * finish the final batch
      
      * add cpu check for expected_pipe_slice.
      
      * finish the rest
      
      * remove print
      
      * style
      
      * fix ssd1b controlnet test
      
      * checking ssd1b
      
      * disable the test.
      
      * make the test_ip_adapter_single controlnet test more robust
      
      * fix: simple inpaint
      
      * multi
      
      * disable panorama
      
      * enable again
      
      * panorama is shaky so leave it for now
      
      * remove print
      
      * raise tolerance.
    • fix OOM for test_vae_tiling (#7510) · 34c90dbb
      YiYi Xu authored
      use float16 and add torch.no_grad()