1. 16 Apr, 2024 1 commit
    • Fixing implementation of ControlNet-XS (#6772) · fda1531d
      UmerHA authored
      
      
      * CheckIn - created DownSubBlocks
      
      * Added extra channels, implemented subblock fwd
      
      * Fixed connection sizes
      
      * checkin
      
      * Removed iter, next in forward
      
      * Models for SD21 & SDXL run through
      
      * Added back pipelines, cleared up connections
      
      * Cleaned up connection creation
      
      * added debug logs
      
      * updated logs
      
      * logs: added input loading
      
      * Update umer_debug_logger.py
      
      * log: Loading hint
      
      * Update umer_debug_logger.py
      
      * added logs
      
      * Changed debug logging
      
      * debug: added more logs
      
      * Fixed num_norm_groups
      
      * Debug: Logging all of SDXL input
      
      * Update umer_debug_logger.py
      
      * debug: updated logs
      
      * checkin
      
      * Readded tests
      
      * Removed debug logs
      
      * Fixed Slow Tests
      
      * Added value checks | Updated model_cpu_offload_seq
      
      * accelerate-offloading works ; fast tests work
      
      * Made unet & addon explicit in controlnet
      
      * Updated slow tests
      
      * Added dtype/device to ControlNetXS
      
      * Filled in test model paths
      
      * Added image_encoder/feature_extractor to XL pipe
      
      * Fixed fast tests
      
      * Added comments and docstrings
      
      * Fixed copies
      
      * Added docs ; Updated slow tests
      
      * Moved changes to UNetMidBlock2DCrossAttn
      
      * tiny cleanups
      
      * Removed stray prints
      
      * Removed ip adapters + freeU
      
      - Removed ip adapters + freeU as they don't make sense for ControlNet-XS
      - Fixed imports of UNet components
      
      * Fixed test_save_load_float16
      
      * Make style, quality, fix-copies
      
      * Changed loading/saving API for ControlNetXS
      
      - Changed loading/saving API for ControlNetXS (see the usage sketch after this commit entry)
      - other small fixes
      
      * Removed ControlNet-XS from research examples
      
      * Make style, quality, fix-copies
      
      * Small fixes
      
      - deleted ControlNetXSModel.init_original
      - added time_embedding_mix to StableDiffusionControlNetXSPipeline.from_pretrained / StableDiffusionXLControlNetXSPipeline.from_pretrained
      - fixed copy hints
      
      * checkin May 11 '23
      
      * CheckIn Mar 12 '24
      
      * Fixed tests for SD
      
      * Added tests for UNetControlNetXSModel
      
      * Fixed SDXL tests
      
      * cleanup
      
      * Delete Pipfile
      
      * CheckIn Mar 20
      
      Started replacing sub-blocks with `ControlNetXSCrossAttnDownBlock2D` and `ControlNetXSCrossAttnUpBlock2D`
      
      * check-in Mar 23
      
      * checkin 24 Mar
      
      * Created init for UNetCnxs and CnxsAddon
      
      * CheckIn
      
      * Made from_modules, from_unet and no_control work
      
      * make style,quality,fix-copies & small changes
      
      * Fixed freezing
      
      * Added gradient ckpt'ing; fixed tests
      
      * Fix slow tests(+compile) ; clear naming confusion
      
      * Don't create UNet in init ; removed class_emb
      
      * Incorporated review feedback
      
      - Deleted get_base_pipeline /  get_controlnet_addon for pipes
      - Pipes inherit from StableDiffusionXLPipeline
      - Made module dicts for cnxs-addon's down/mid/up classes
      - Added support for qkv fusion and freeU
      
      * Make style, quality, fix-copies
      
      * Implemented review feedback
      
      * Removed compatibility check for vae/ctrl embedding
      
      * make style, quality, fix-copies
      
      * Delete Pipfile
      
      * Integrated review feedback
      
      - Importing ControlNetConditioningEmbedding now
      - get_down/mid/up_block_addon now outside class
      - renamed `do_control` to `apply_control`
      
      * Reduced size of test tensors
      
      For this, added `norm_num_groups` as a parameter everywhere
      
      * Renamed cnxs-`Addon` to cnxs-`Adapter`
      
      - `ControlNetXSAddon` -> `ControlNetXSAdapter`
      - `ControlNetXSAddonDownBlockComponents` -> `DownBlockControlNetXSAdapter`, and similarly for mid/up
      - `get_mid_block_addon` -> `get_mid_block_adapter`, and similarly for down/up
      
      * Fixed save_pretrained/from_pretrained bug
      
      * Removed redundant code
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      fda1531d
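A minimal usage sketch of the reworked ControlNet-XS loading API described in the commit above. This assumes a diffusers release that includes #6772; the checkpoint id and conditioning-image URL are placeholders, not values taken from the PR.

```python
import torch
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline
from diffusers.utils import load_image

# Load a trained ControlNet-XS adapter (checkpoint id is a placeholder).
controlnet = ControlNetXSAdapter.from_pretrained(
    "path/to/controlnet-xs-sdxl-canny", torch_dtype=torch.float16
)

# The pipeline wires the adapter into the base UNet internally
# (conceptually: UNetControlNetXSModel.from_unet(unet, controlnet)).
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny_image = load_image("https://example.com/canny-edges.png")  # placeholder conditioning image
image = pipe(
    "a futuristic city at dusk",
    image=canny_image,
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("controlnet_xs_output.png")
```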
  2. 15 Apr, 2024 2 commits
  3. 13 Apr, 2024 2 commits
  4. 12 Apr, 2024 1 commit
  5. 11 Apr, 2024 7 commits
  6. 10 Apr, 2024 9 commits
  7. 09 Apr, 2024 3 commits
  8. 08 Apr, 2024 3 commits
  9. 05 Apr, 2024 2 commits
    • [IF] add set_begin_index for all IF pipelines (#7577) · 6133d98f
      YiYi Xu authored
      add set_begin_index for all IF pipelines
      6133d98f
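A hedged sketch of the `set_begin_index` pattern being rolled out to the IF pipelines: when an image-to-image style run keeps only the tail of the timestep schedule, the scheduler is told where stepping actually begins. The strength/offset arithmetic below is schematic, not copied from the pipelines.

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
num_inference_steps, strength = 50, 0.6
scheduler.set_timesteps(num_inference_steps)

# Keep only the last `strength` fraction of the schedule, as img2img runs do.
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
t_start = max(num_inference_steps - init_timestep, 0)
timesteps = scheduler.timesteps[t_start * scheduler.order :]

# Tell the scheduler where stepping begins so its step-index bookkeeping
# stays aligned with the truncated schedule (guarded for older versions).
if hasattr(scheduler, "set_begin_index"):
    scheduler.set_begin_index(t_start * scheduler.order)
```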
    • [Tests] reduce block sizes of UNet and VAE tests (#7560) · 1c60e094
      Sayak Paul authored
      * reduce block sizes for unet1d.
      
      * reduce blocks for unet_2d.
      
      * reduce block size for unet_motion
      
      * increase channels.
      
      * correctly increase channels.
      
      * reduce number of layers in unet2dconditionmodel tests.
      
      * reduce block sizes for unet2dconditionmodel tests
      
      * reduce block sizes for unet3dconditionmodel.
      
      * fix: test_feed_forward_chunking
      
      * fix: test_forward_with_norm_groups
      
      * skip spatiotemporal tests on MPS.
      
      * reduce block size in AutoencoderKL.
      
      * reduce block sizes for vqmodel.
      
      * further reduce block size.
      
      * make style.
      
      * Empty-Commit
      
      * reduce sizes for ConsistencyDecoderVAETests
      
      * further reduction.
      
      * further block reductions in AutoencoderKL and AsymmetricAutoencoderKL.
      
      * massively reduce the block size in unet2dconditionmodel.
      
      * reduce sizes for unet3d
      
      * fix tests in unet3d.
      
      * reduce blocks further in motion unet.
      
      * fix: output shape
      
      * add attention_head_dim to the test configuration.
      
      * remove unexpected keyword arg
      
      * up a bit.
      
      * groups.
      
      * up again
      
      * fix
      1c60e094
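To illustrate the kind of change, here is a hedged sketch of a deliberately tiny `UNet2DConditionModel` test configuration in the spirit of this PR; the exact values used in the diffusers tests may differ.

```python
import torch
from diffusers import UNet2DConditionModel

# Tiny configuration: small block_out_channels, one layer per block, and a
# matching norm_num_groups / attention_head_dim so the model still builds.
unet = UNet2DConditionModel(
    sample_size=16,
    in_channels=4,
    out_channels=4,
    layers_per_block=1,
    block_out_channels=(8, 16),
    norm_num_groups=4,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=8,
    attention_head_dim=2,
)

sample = torch.randn(1, 4, 16, 16)
timestep = torch.tensor([1])
encoder_hidden_states = torch.randn(1, 4, 8)
out = unet(sample, timestep, encoder_hidden_states).sample
print(out.shape)  # torch.Size([1, 4, 16, 16])
```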
  10. 04 Apr, 2024 1 commit
    • Skip `test_freeu_enabled` on MPS (#7570) · 71f49a5d
      UmerHA authored
      * Skip `test_freeu_enabled` on MPS
      
      * Small fixes
      
      - import skip_mps correctly
      - disable all instances of test_freeu_enabled
      
      * Empty commit to trigger tests
      
      * Empty commit to trigger CI
      71f49a5d
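A hedged sketch of the `skip_mps` usage this commit settles on; the test body is illustrative only, and the real tests exercise `pipe.enable_freeu(...)` on actual pipelines.

```python
import unittest

import torch
from diffusers.utils.testing_utils import skip_mps  # skips when the test device is "mps"


class FreeUSmokeTests(unittest.TestCase):
    @skip_mps  # FreeU relies on torch.fft, which is not fully supported on MPS
    def test_freeu_enabled(self):
        # Illustrative body only; the real tests call
        # pipe.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) and compare
        # outputs against a run without FreeU.
        self.assertTrue(hasattr(torch.fft, "fftn"))


if __name__ == "__main__":
    unittest.main()
```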
  11. 03 Apr, 2024 5 commits
    • Update pipeline_animatediff_video2video.py (#7457) · 35db2fde
      Abhinav Gopal authored
      * Update pipeline_animatediff_video2video.py
      
      * commit with test for whether latent input can be passed into animatediffvid2vid
      35db2fde
    • [Chore] increase number of workers for the tests. (#7558) · ad55ce61
      Sayak Paul authored
      * increase number of workers for the tests.
      
      * move to beefier runner.
      
      * improve the fast push tests too.
      
      * use a beefy machine for pytorch pipeline tests
      
      * up the number of workers further.
      ad55ce61
    • [Core] refactor transformers 2d into multiple init variants. (#7491) · a9a5b14f
      Sayak Paul authored
      * refactor transformers 2d into multiple legacy variants.
      
      * fix: init.
      
      * fix recursive init.
      
      * add inits.
      
      * make transformer block creation more modular.
      
      * complete refactor.
      
      * remove forward
      
      * debug
      
      * remove legacy blocks and refactor within the module itself.
      
      * remove print
      
      * guard caption projection
      
      * remove fetcher.
      
      * reduce the number of args.
      
      * fix: norm_type
      
      * group variables that are shared.
      
      * remove _get_transformer_blocks
      
      * harmonize the init function signatures.
      
      * transformer_blocks to common
      
      * repeat .
      a9a5b14f
    • UniPC Multistep add `rescale_betas_zero_snr` (#7531) · aa190259
      Beinsezii authored
      * UniPC Multistep add `rescale_betas_zero_snr`
      
      Same patch as DPM and Euler, with the patched final alpha cumprod.
      
      BF16 doesn't seem to break down, I think because UniPC already upcasts during
      some phases. We could still force an upcast, since it only costs ≈ 0.005 it/s
      for me, but the difference in output is very small. A better endeavor might be
      upcasting in step() and removing all the other upcasts elsewhere.
      (A usage sketch follows this commit entry.)
      
      * UniPC ZSNR UT
      
      * Re-add `rescale_betas_zero_snr` doc, oops
      aa190259
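A hedged usage sketch of the new flag, assuming a diffusers version that includes this PR; the model id is a placeholder, and `timestep_spacing="trailing"` plus `guidance_rescale` follow the usual zero-terminal-SNR recipe rather than anything mandated by the PR.

```python
import torch
from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder base model
    torch_dtype=torch.float16,
).to("cuda")

# rescale_betas_zero_snr patches the beta schedule (and the final alpha_cumprod)
# so the terminal step has zero SNR, mirroring the DPM/Euler implementations.
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config,
    rescale_betas_zero_snr=True,
    timestep_spacing="trailing",
)

image = pipe(
    "a photo of a black cat on a black couch, dim lighting",
    guidance_rescale=0.7,  # commonly paired with zero-SNR schedules
).images[0]
```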
    • UniPC Multistep fix tensor dtype/device on order=3 (#7532) · 19ab04ff
      Beinsezii authored
      * UniPC UTs iterate solvers on FP16
      
      It wasn't catching errors on order==3. Might be excessive?
      
      * UniPC Multistep fix tensor dtype/device on order=3
      
      * UniPC UTs Add v_pred to fp16 test iter
      
      For completeness' sake. Probably overkill?
      19ab04ff
  12. 02 Apr, 2024 4 commits
    • add: utility to format our docs too 📜 (#7314) · 4a343077
      Sayak Paul authored
      * add: utility to format our docs too 📜
      
      * debugging saga
      
      * fix: message
      
      * checking
      
      * should be fixed.
      
      * revert pipeline_fixture
      
      * remove empty line
      
      * make style
      
      * fix: setup.py
      
      * style.
      4a343077
    • 7529 do not disable autocast for cuda devices (#7530) · 8e963d1c
      Bagheera authored
      
      
      * 7529 do not disable autocast for cuda devices
      
      * Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
      
      * add autocast fix to other training examples
      
      * disable native_amp for dreambooth (sdxl)
      
      * disable native_amp for pix2pix (sdxl)
      
      * remove tests from remaining files
      
      * disable native_amp on huggingface accelerator for every training example that uses it
      
      * convert more usages of autocast to nullcontext, make style fixes
      
      * make style fixes
      
      * style.
      
      * Empty-Commit
      
      ---------
      Co-authored-by: bghira <bghira@users.github.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      8e963d1c
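A hedged sketch of the pattern the training-example changes converge on: only wrap in `torch.autocast` when Accelerate is not already handling mixed precision (and never on MPS). The helper name and exact conditions are assumptions, not copied from the scripts.

```python
import contextlib

import torch


def get_autocast_context(accelerator):
    # When Accelerate already handles mixed precision (accelerator.native_amp),
    # wrapping forward passes in torch.autocast again is redundant, and autocast
    # is unsupported on MPS, so fall back to a nullcontext in both cases.
    if accelerator.native_amp or accelerator.device.type == "mps":
        return contextlib.nullcontext()
    return torch.autocast(accelerator.device.type)


# Usage inside a validation loop (schematic):
# with get_autocast_context(accelerator):
#     images = pipeline(prompt, num_inference_steps=25).images
```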
    • [Tests] Speed up fast pipelines part II (#7521) · 2b04ec2f
      Sayak Paul authored
      
      
      * start printing the tensors.
      
      * print full throttle
      
      * set static slices for 7 tests.
      
      * remove printing.
      
      * flatten
      
      * disable test for controlnet
      
      * what happens when things are seeded properly?
      
      * set the right value
      
      * style.
      
      * make pia test fail to check things
      
      * print.
      
      * fix pia.
      
      * checking for animatediff.
      
      * fix: animatediff.
      
      * video synthesis
      
      * final piece.
      
      * style.
      
      * print guess.
      
      * fix: assertion for control guess.
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      2b04ec2f
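For context, a hedged, self-contained sketch of the fixed-slice assertion pattern these tests move to; the pipeline call is replaced with a random tensor here, and real tests hard-code the expected numbers.

```python
import numpy as np
import torch

# Seed everything, take a small corner slice of the generated image, and
# compare it against a hard-coded reference slice instead of re-deriving the
# expected output on every test invocation.
torch.manual_seed(0)
image = torch.rand(1, 64, 64, 3).numpy()            # stand-in for pipe(**inputs).images
image_slice = image[0, -3:, -3:, -1].flatten()
expected_slice = image_slice.copy()                 # real tests pin these 9 numbers explicitly
assert np.abs(image_slice - expected_slice).max() < 1e-3
```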
    • [Chore] remove class assignments for linear and conv. (#7553) · 000fa82a
      Sayak Paul authored
      * remove class assignments for linear and conv.
      
      * fix: self.nn
      000fa82a
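A schematic before/after of the kind of indirection removed here; the alias names in the comment are recalled from memory and should be treated as assumptions.

```python
import torch.nn as nn

# Before (schematic): layer classes were chosen through module-level aliases so
# LoRA-compatible wrappers could be used when the PEFT backend was not active:
#
#     linear_cls = nn.Linear if USE_PEFT_BACKEND else LoRACompatibleLinear
#     conv_cls = nn.Conv2d if USE_PEFT_BACKEND else LoRACompatibleConv
#     self.proj = linear_cls(in_dim, out_dim)
#
# After: the aliases are gone and plain torch modules are used directly.
class ExampleBlock(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        self.conv = nn.Conv2d(out_dim, out_dim, kernel_size=3, padding=1)
```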