1. 24 May, 2024 2 commits
    • sampling bug fix in diffusers tutorial "basic_training.md" (#8223) · 1096f88e
      Yue Wu authored
      sampling bug fix in basic_training.md
      
      In the diffusers basic training tutorial, passing the manual seed argument (generator=torch.manual_seed(config.seed)) in the pipeline call inside the evaluate() function reseeds PyTorch's global RNG, which rewinds the dataloader shuffling; the model then sees the same sequence of training examples after every evaluation call, leading to overfitting. Using generator=torch.Generator(device='cpu').manual_seed(config.seed) avoids this.
      1096f88e
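
      A minimal sketch of the fix described in this commit, assuming a DDPMPipeline stands in for the pipeline the tutorial trains (the checkpoint id is only illustrative):

```python
import torch
from diffusers import DDPMPipeline

# Illustrative checkpoint; the tutorial uses the pipeline built during training.
pipeline = DDPMPipeline.from_pretrained("google/ddpm-cat-256")
seed = 0

# Problematic: torch.manual_seed() reseeds (and returns) the global RNG, so the
# dataloader's shuffling is rewound every time evaluate() runs.
images = pipeline(batch_size=4, generator=torch.manual_seed(seed)).images

# Fix: a dedicated Generator keeps sampling reproducible without touching the
# global RNG that the dataloader relies on.
generator = torch.Generator(device="cpu").manual_seed(seed)
images = pipeline(batch_size=4, generator=generator).images
```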
    • Clean up `from_single_file` docs (#8268) · cef4a512
      Dhruv Nair authored
      * update
      
      * update
      cef4a512
  2. 21 May, 2024 1 commit
  3. 20 May, 2024 2 commits
  4. 13 May, 2024 1 commit
  5. 10 May, 2024 4 commits
    • #7535 Update FloatTensor type hints to Tensor (#7883) · be4afa0b
      Mark Van Aken authored
      * find & replace all FloatTensors to Tensor
      
      * apply formatting
      
      * Update torch.FloatTensor to torch.Tensor in the remaining files
      
      * formatting
      
      * Fix the rest of the places where FloatTensor is used as well as in documentation
      
      * formatting
      
      * Update new file from FloatTensor to Tensor
      be4afa0b
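
      Illustrative of the find-and-replace this commit performs; `scale_latents` is a made-up helper, not a diffusers function:

```python
import torch

# Before: torch.FloatTensor only describes float32 CPU tensors.
def scale_latents_old(latents: torch.FloatTensor) -> torch.FloatTensor:
    return latents * 0.18215

# After: torch.Tensor also covers fp16/bf16 and CUDA tensors, matching what
# the pipelines actually pass around.
def scale_latents(latents: torch.Tensor) -> torch.Tensor:
    return latents * 0.18215
```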
    • [Core] introduce videoprocessor. (#7776) · 04f4bd54
      Sayak Paul authored
      
      
      * introduce videoprocessor.
      
      * fix quality
      
      * address yiyi's feedback
      
      * fix preprocess_video call.
      
      * video_processor -> image_processor
      
      * fix
      
      * fix more.
      
      * quality
      
      * image_processor -> video_processor
      
      * support List[List[PIL.Image.Image]]
      
      * change to video_processor.
      
      * documentation
      
      * Apply suggestions from code review
      
      * changes
      
      * remove print.
      
      * refactor video processor (part # 7776) (#7861)
      
      * update
      
      * update remove deprecate
      
      * Update src/diffusers/video_processor.py
      
      * update
      
      * Apply suggestions from code review
      
      * deprecate list of 5d for video and list of 4d for image + apply other feedbacks
      
      * up
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * add doc.
      
      * tensor2vid -> postprocess_video.
      
      * refactor preprocess with preprocess_video
      
      * set default values.
      
      * empty commit
      
      * more refactoring of prepare_latents in animatediff vid2vid
      
      * checking documentation
      
      * remove documentation for now.
      
      * fix animatediff sdxl
      
      * fix test failure [part of video processor PR] (#7905)
      
      up
      
      * remove preceed_with_frames.
      
      * doc
      
      * fix
      
      * fix
      
      * remove video input as a single-frame video.
      
      ---------
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      04f4bd54
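
      A rough sketch of the new helper; the argument names and tensor layouts below are inferred from the PR description and may differ between versions:

```python
from PIL import Image
from diffusers.video_processor import VideoProcessor

video_processor = VideoProcessor(vae_scale_factor=8)

# One 16-frame dummy clip as a list of PIL images; the PR also adds support
# for List[List[PIL.Image.Image]], i.e. a batch of clips.
frames = [Image.new("RGB", (256, 256)) for _ in range(16)]

# preprocess_video replaces the per-pipeline preprocessing helpers.
video = video_processor.preprocess_video(frames, height=256, width=256)
print(video.shape)  # a 5D tensor ready for the pipeline

# postprocess_video replaces the old tensor2vid helper on the way out.
out = video_processor.postprocess_video(video, output_type="np")
```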
    • add missing image processors to the docs (#7910) · 82be58c5
      Sayak Paul authored
      add missing processors.
      82be58c5
    • upgrade to python 3.10 in the Dockerfiles (#7893) · 66956356
      Sayak Paul authored
      * upgrade to python 3.10
      
      * fix
      
      * try https://askubuntu.com/questions/1459694/can-not-find-python3-10-after-apt-get-installation
      
      * fix
      
      * up
      
      * yes
      
      * okay
      
      * up
      
      * up
      
      * up
      
      * up
      
      * up
      
      * check
      
      * okay
      
      * up
      
      * i[
      
      * fix
      66956356
  6. 09 May, 2024 2 commits
  7. 08 May, 2024 1 commit
    • [Pipeline] AnimateDiff SDXL (#6721) · 818f7607
      Aryan authored
      
      
      * update conversion script to handle motion adapter sdxl checkpoint
      
      * add animatediff xl
      
      * handle addition_embed_type
      
      * fix output
      
      * update
      
      * add imports
      
      * make fix-copies
      
      * add decode latents
      
      * update docstrings
      
      * add animatediff sdxl to docs
      
      * remove unnecessary lines
      
      * update example
      
      * add test
      
      * revert conv_in conv_out kernel param
      
      * remove unused param addition_embed_type_num_heads
      
      * latest IPAdapter impl
      
      * make fix-copies
      
      * fix return
      
      * add IPAdapterTesterMixin to tests
      
      * fix return
      
      * revert based on suggestion
      
      * add freeinit
      
      * fix test_to_dtype test
      
      * use StableDiffusionMixin instead of different helper methods
      
      * fix progress bar iterations
      
      * apply suggestions from review
      
      * hardcode flip_sin_to_cos and freq_shift
      
      * make fix-copies
      
      * fix ip adapter implementation
      
      * fix last failing test
      
      * make style
      
      * Update docs/source/en/api/pipelines/animatediff.md
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * remove todo
      
      * fix doc-builder errors
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      818f7607
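
      A hedged usage sketch of the pipeline added here; the motion-adapter repo id is an assumption and may differ from the checkpoint the PR targets:

```python
import torch
from diffusers import AnimateDiffSDXLPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# Assumed checkpoint ids; substitute the SDXL motion adapter you actually use.
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-sdxl-beta", torch_dtype=torch.float16
)
pipe = AnimateDiffSDXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    motion_adapter=adapter,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

output = pipe(
    prompt="a panda surfing a wave, highly detailed",
    num_frames=16,
    num_inference_steps=25,
    guidance_scale=8.0,
)
export_to_gif(output.frames[0], "animatediff_sdxl.gif")
```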
  8. 07 May, 2024 1 commit
  9. 06 May, 2024 1 commit
  10. 03 May, 2024 2 commits
  11. 30 Apr, 2024 1 commit
  12. 28 Apr, 2024 1 commit
  13. 26 Apr, 2024 1 commit
  14. 25 Apr, 2024 2 commits
  15. 23 Apr, 2024 1 commit
  16. 22 Apr, 2024 2 commits
  17. 19 Apr, 2024 1 commit
  18. 17 Apr, 2024 3 commits
  19. 16 Apr, 2024 1 commit
    • Fixing implementation of ControlNet-XS (#6772) · fda1531d
      UmerHA authored
      
      
      * CheckIn - created DownSubBlocks
      
      * Added extra channels, implemented subblock fwd
      
      * Fixed connection sizes
      
      * checkin
      
      * Removed iter, next in forward
      
      * Models for SD21 & SDXL run through
      
      * Added back pipelines, cleared up connections
      
      * Cleaned up connection creation
      
      * added debug logs
      
      * updated logs
      
      * logs: added input loading
      
      * Update umer_debug_logger.py
      
      * log: Loading hint
      
      * Update umer_debug_logger.py
      
      * added logs
      
      * Changed debug logging
      
      * debug: added more logs
      
      * Fixed num_norm_groups
      
      * Debug: Logging all of SDXL input
      
      * Update umer_debug_logger.py
      
      * debug: updated logs
      
      * checkin
      
      * Readded tests
      
      * Removed debug logs
      
      * Fixed Slow Tests
      
      * Added value checks | Updated model_cpu_offload_seq
      
      * accelerate-offloading works ; fast tests work
      
      * Made unet & addon explicit in controlnet
      
      * Updated slow tests
      
      * Added dtype/device to ControlNetXS
      
      * Filled in test model paths
      
      * Added image_encoder/feature_extractor to XL pipe
      
      * Fixed fast tests
      
      * Added comments and docstrings
      
      * Fixed copies
      
      * Added docs ; Updates slow tests
      
      * Moved changes to UNetMidBlock2DCrossAttn
      
      * tiny cleanups
      
      * Removed stray prints
      
      * Removed ip adapters + freeU
      
      - Removed ip adapters + freeU as they don't make sense for ControlNet-XS
      - Fixed imports of UNet components
      
      * Fixed test_save_load_float16
      
      * Make style, quality, fix-copies
      
      * Changed loading/saving API for ControlNetXS
      
      - Changed loading/saving API for ControlNetXS
      - other small fixes
      
      * Removed ControlNet-XS from research examples
      
      * Make style, quality, fix-copies
      
      * Small fixes
      
      - deleted ControlNetXSModel.init_original
      - added time_embedding_mix to StableDiffusionControlNetXSPipeline.from_pretrained / StableDiffusionXLControlNetXSPipeline.from_pretrained
      - fixed copy hints
      
      * checkin May 11 '23
      
      * CheckIn Mar 12 '24
      
      * Fixed tests for SD
      
      * Added tests for UNetControlNetXSModel
      
      * Fixed SDXL tests
      
      * cleanup
      
      * Delete Pipfile
      
      * CheckIn Mar 20
      
      Started replacing sub-blocks with `ControlNetXSCrossAttnDownBlock2D` and `ControlNetXSCrossAttnUpBlock2D`
      
      * check-in Mar 23
      
      * checkin 24 Mar
      
      * Created init for UNetCnxs and CnxsAddon
      
      * CheckIn
      
      * Made from_modules, from_unet and no_control work
      
      * make style,quality,fix-copies & small changes
      
      * Fixed freezing
      
      * Added gradient ckpt'ing; fixed tests
      
      * Fix slow tests(+compile) ; clear naming confusion
      
      * Don't create UNet in init ; removed class_emb
      
      * Incorporated review feedback
      
      - Deleted get_base_pipeline /  get_controlnet_addon for pipes
      - Pipes inherit from StableDiffusionXLPipeline
      - Made module dicts for cnxs-addon's down/mid/up classes
      - Added support for qkv fusion and freeU
      
      * Make style, quality, fix-copies
      
      * Implemented review feedback
      
      * Removed compatibility check for vae/ctrl embedding
      
      * make style, quality, fix-copies
      
      * Delete Pipfile
      
      * Integrated review feedback
      
      - Importing ControlNetConditioningEmbedding now
      - get_down/mid/up_block_addon now outside class
      - renamed `do_control` to `apply_control`
      
      * Reduced size of test tensors
      
      For this, added `norm_num_groups` as parameter everywhere
      
      * Renamed cnxs-`Addon` to cnxs-`Adapter`
      
      - `ControlNetXSAddon` -> `ControlNetXSAdapter`
      - `ControlNetXSAddonDownBlockComponents` -> `DownBlockControlNetXSAdapter`, and similarly for mid/up
      - `get_mid_block_addon` -> `get_mid_block_adapter`, and similarly for down/up
      
      * Fixed save_pretrained/from_pretrained bug
      
      * Removed redundant code
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      fda1531d
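
      A hedged sketch of the reworked API (ControlNetXSAdapter plus the SDXL pipeline); the adapter repo id and the conditioning-image URL are placeholders:

```python
import torch
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline
from diffusers.utils import load_image

# Placeholder repo id; point this at a ControlNet-XS adapter trained for SDXL.
controlnet = ControlNetXSAdapter.from_pretrained(
    "path/to/controlnet-xs-sdxl-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

canny_image = load_image("https://example.com/canny_edges.png")  # placeholder URL
image = pipe(
    prompt="aerial view of a futuristic city, sharp focus",
    image=canny_image,
    controlnet_conditioning_scale=0.5,
).images[0]
```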
  20. 11 Apr, 2024 1 commit
  21. 10 Apr, 2024 3 commits
    • [docs] Prompt enhancer (#7565) · 1d480298
      Steven Liu authored
      * prompt enhance
      
      * edits
      
      * align titles
      
      * feedback
      
      * feedback
      
      * feedback
      
      * link to style
      1d480298
    • [docs] remove duplicate tip block. (#7625) · a402431d
      Sayak Paul authored
      remove duplicate tip block.
      a402431d
    • [Core] add "balanced" `device_map` support to pipelines (#6857) · 3e4a6bd2
      Sayak Paul authored
      
      
      * get device <-> component mapping when using multiple gpus.
      
      * condition the device_map bits.
      
      * relax condition
      
      * device_map progress.
      
      * device_map enhancement
      
      * some cleaning up and debugging
      
      * Apply suggestions from code review
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      
      * incorporate suggestions from PR.
      
      * remove multi-gpu condition for now.
      
      * guard check the component -> device mapping
      
      * fix: device_memory variable
      
      * dispatching transformers model to have force_hooks=True
      
      * better guarding for transformers device_map
      
      * introduce support balanced_low_memory and balanced_ultra_low_memory.
      
      * remove device_map patch.
      
      * fix: intermediate variable scoping.
      
      * fix: condition in cpu offload.
      
      * fix: flax class restrictions.
      
      * remove modifications from cpu_offload and model_offload
      
      * incorporate changes.
      
      * add a simple forward pass test
      
      * add: torch_device in get_inputs()
      
      * add: tests
      
      * remove print
      
      * safe-guard to(), model offloading and cpu offloading when balanced is used as a device_map.
      
      * style
      
      * remove .
      
      * safeguard device_map with more checks and remove invalid device_mapping strategies.
      
      * make  a class attribute and adjust tests accordingly.
      
      * fix device_map check
      
      * fix test
      
      * adjust comment
      
      * fix: device_map attribute
      
      * fix: dispatching.
      
      * max_memory test for pipeline
      
      * version guard the tests
      
      * fix guard.
      
      * address review feedback.
      
      * reset_device_map method.
      
      * add: test for reset_hf_device_map
      
      * fix a couple things.
      
      * add reset_device_map() in the error message.
      
      * add tests for checking reset_device_map doesn't have unintended consequences.
      
      * fix reset_device_map and offloading tests.
      
      * create _get_final_device_map utility.
      
      * hf_device_map -> _hf_device_map
      
      * add documentation
      
      * add notes suggested by Marc.
      
      * styling.
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      
      * move updates within gpu condition.
      
      * other docs related things
      
      * note on ignoring a device not specified in .
      
      * provide a suggestion if device mapping errors out.
      
      * fix: typo.
      
      * _hf_device_map -> hf_device_map
      
      * Empty-Commit
      
      * add: example hf_device_map.
      
      ---------
      Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
      3e4a6bd2
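
      A short sketch of the "balanced" device placement added here (requires accelerate and more than one GPU; the checkpoint id is just an example):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    device_map="balanced",
    torch_dtype=torch.float16,
)
# Component -> device mapping computed by accelerate, e.g. {"unet": 0, "vae": 1, ...}
print(pipe.hf_device_map)

# .to(), enable_model_cpu_offload() and enable_sequential_cpu_offload() are
# guarded while a device_map is active; undo the mapping first.
pipe.reset_device_map()
pipe.to("cuda")
```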
  22. 08 Apr, 2024 2 commits
  23. 01 Apr, 2024 1 commit
  24. 29 Mar, 2024 1 commit
    • Implements Blockwise lora (#7352) · 03024468
      UmerHA authored
      
      
      * Initial commit
      
      * Implemented block lora
      
      - implemented block lora
      - updated docs
      - added tests
      
      * Finishing up
      
      * Reverted unrelated changes made by make style
      
      * Fixed typo
      
      * Fixed bug + Made text_encoder_2 scalable
      
      * Integrated some review feedback
      
      * Incorporated review feedback
      
      * Fix tests
      
      * Made every module configurable
      
      * Adapted to new lora test structure
      
      * Final cleanup
      
      * Some more final fixes
      
      - Included examples in `using_peft_for_inference.md`
      - Added hint that only attns are scaled
      - Removed NoneTypes
      - Added test to check mismatching lens of adapter names / weights raise error
      
      * Update using_peft_for_inference.md
      
      * Update using_peft_for_inference.md
      
      * Make style, quality, fix-copies
      
      * Updated tutorial;Warning if scale/adapter mismatch
      
      * floats are forwarded as-is; changed tutorial scale
      
      * make style, quality, fix-copies
      
      * Fixed typo in tutorial
      
      * Moved some warnings into `lora_loader_utils.py`
      
      * Moved scale/lora mismatch warnings back
      
      * Integrated final review suggestions
      
      * Empty commit to trigger CI
      
      * Reverted empty commit to trigger CI
      
      ---------
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      03024468
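
      A sketch of the blockwise scaling this PR adds; the LoRA path and adapter name are placeholders, and (as the commit notes) only attention layers are scaled:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora", adapter_name="my_lora")  # placeholder

# A plain float scales every module; a nested dict targets individual
# down/mid/up blocks (and, per block, individual transformer layers).
scales = {
    "text_encoder": 0.5,
    "text_encoder_2": 0.5,
    "unet": {
        "down": 0.9,
        "mid": 1.0,
        "up": {"block_0": 0.6, "block_1": [0.4, 0.8, 1.0]},
    },
}
pipe.set_adapters("my_lora", scales)
image = pipe("a pixel-art castle at dusk").images[0]
```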
  25. 25 Mar, 2024 1 commit
  26. 22 Mar, 2024 1 commit