1. 25 Nov, 2025 2 commits
    • let's go Flux2 🚀 (#12711) · 5ffb73d4
      Sayak Paul authored
      
      
      * add vae
      
      * Initial commit for Flux 2 Transformer implementation
      
      * add pipeline part
      
      * small edits to the pipeline and conversion
      
      * update conversion script
      
      * fix
      
      * up up
      
      * finish pipeline
      
      * Remove Flux IP Adapter logic for now
      
      * Remove deprecated 3D id logic
      
      * Remove ControlNet logic for now
      
      * Add link to ViT-22B paper as reference for parallel transformer blocks such as the Flux 2 single stream block
      
      * update pipeline
      
      * Don't use biases for input projs and output AdaNorm
      
      * up
      
      * Remove bias for double stream block text QKV projections
      
      * Add script to convert Flux 2 transformer to diffusers
      
      * make style and make quality
      
      * fix a few things.
      
      * allow sft files to go.
      
      * fix image processor
      
      * fix batch
      
      * style a bit
      
      * Fix some bugs in Flux 2 transformer implementation
      
      * Fix dummy input preparation and fix some test bugs
      
      * fix dtype casting in timestep guidance module.
      
      * resolve conflicts.
      
      * remove ip adapter stuff.
      
      * Fix Flux 2 transformer consistency test
      
      * Fix bug in Flux2TransformerBlock (double stream block)
      
      * Get remaining Flux 2 transformer tests passing
      
      * make style; make quality; make fix-copies
      
      * remove stuff.
      
      * fix type annotation.
      
      * remove unneeded stuff from tests
      
      * tests
      
      * up
      
      * up
      
      * add sf support
      
      * Remove unused IP Adapter and ControlNet logic from transformer (#9)
      
      * copied from
      
      * Apply suggestions from code review
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      
      * up
      
      * Refactor Flux2Attention into separate classes for double stream and single stream attention
      
      * Add _supports_qkv_fusion to AttentionModuleMixin to allow subclasses to disable QKV fusion
      
      * Have Flux2ParallelSelfAttention inherit from AttentionModuleMixin with _supports_qkv_fusion=False
      
      * Log debug message when calling fuse_projections on a AttentionModuleMixin subclass that does not support QKV fusion
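
      The mixin-flag pattern described in the bullets above can be sketched roughly like this. This is an illustrative sketch only, not the actual diffusers implementation: the names `AttentionModuleMixin`, `_supports_qkv_fusion`, `fuse_projections`, and `Flux2ParallelSelfAttention` come from the commit messages, while the method bodies and logging setup are hypothetical.

      ```python
      import logging

      logger = logging.getLogger(__name__)


      class AttentionModuleMixin:
          # Class-level flag; subclasses that cannot fuse their separate
          # Q/K/V projections override this with False (hypothetical sketch).
          _supports_qkv_fusion = True

          def fuse_projections(self):
              if not self._supports_qkv_fusion:
                  # Log a debug message instead of raising, so callers that
                  # blindly fuse every attention module keep working.
                  logger.debug(
                      "%s does not support QKV fusion; skipping.", type(self).__name__
                  )
                  return
              # ... fuse the separate q/k/v projections into one matmul ...
              self.fused = True


      class Flux2ParallelSelfAttention(AttentionModuleMixin):
          # Parallel single-stream attention opts out of fusion.
          _supports_qkv_fusion = False
      ```

      With this shape, `fuse_projections()` becomes a safe no-op on the parallel self-attention block while still fusing everywhere else.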
      
      * Address review comments
      
      * Update src/diffusers/pipelines/flux2/pipeline_flux2.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * up
      
      * Remove maybe_allow_in_graph decorators for Flux 2 transformer blocks (#12)
      
      * up
      
      * support ostris loras. (#13)
      
      * up
      
      * update schedule
      
      * up
      
      * up (#17)
      
      * add training scripts (#16)
      
      * add training scripts
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      
      * model cpu offload in validation.
      
      * add flux.2 readme
      
      * add img2img and tests
      
      * cpu offload in log validation
      
      * Apply suggestions from code review
      
      * fix
      
      * up
      
      * fixes
      
      * remove i2i training tests for now.
      
      ---------
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      Co-authored-by: linoytsaban <linoy@huggingface.co>
      
      * up
      
      ---------
      Co-authored-by: yiyixuxu <yixu310@gmail.com>
      Co-authored-by: Daniel Gu <dgu8957@gmail.com>
      Co-authored-by: yiyi@huggingface.co <yiyi@ip-10-53-87-203.ec2.internal>
      Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
      Co-authored-by: yiyi@huggingface.co <yiyi@ip-26-0-160-103.ec2.internal>
      Co-authored-by: Linoy Tsaban <linoytsaban@gmail.com>
      Co-authored-by: linoytsaban <linoy@huggingface.co>
    • fix typo in docs (#12675) · d33d9f67
      Junsong Chen authored
      
      
      * fix typo in docs
      
      * Update docs/source/en/api/pipelines/sana_video.md
      Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
      
      ---------
      Co-authored-by: dg845 <58458699+dg845@users.noreply.github.com>
  2. 17 Nov, 2025 1 commit
  3. 15 Nov, 2025 1 commit
    • Update Wan Animate Docs (#12658) · a9e4883b
      dg845 authored
      * Update the Wan Animate docs to reflect the most recent code
      
      * Further explain input preprocessing and link to original Wan Animate preprocessing scripts
  4. 13 Nov, 2025 2 commits
  5. 10 Nov, 2025 1 commit
  6. 06 Nov, 2025 1 commit
  7. 04 Nov, 2025 1 commit
  8. 28 Oct, 2025 3 commits
    • Bria fibo (#12545) · 84e16575
      galbria authored
      
      
      * Bria FIBO pipeline
      
      * style fixes
      
      * fix CR
      
      * Refactor BriaFibo classes and update pipeline parameters
      
      - Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from Flux equivalents.
      - Modified the _unpack_latents method in BriaFiboPipeline to improve clarity.
      - Increased the default max_sequence_length to 3000 and added a new optional parameter do_patching.
      - Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests.
      
      * edit the docs of FIBO
      
      * Remove unused BriaFibo imports and update CPU offload method in BriaFiboPipeline
      
      * Refactor FIBO classes to BriaFibo naming convention
      
      - Updated class names from FIBO to BriaFibo for consistency across the module.
      - Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming.
      - Ensured all references in the BriaFiboTransformer2DModel are updated accordingly.
      
      * Add BriaFiboTransformer2DModel import to transformers module
      
      * Remove unused BriaFibo imports from modular pipelines and add BriaFiboTransformer2DModel and BriaFiboPipeline classes to dummy objects for enhanced compatibility with torch and transformers.
      
      * Update BriaFibo classes with copied documentation and fix import typo in pipeline module
      
      - Added documentation comments indicating the source of copied code in BriaFiboTransformerBlock and _pack_latents methods.
      - Corrected the import statement for BriaFiboPipeline in the pipelines module.
      
      * Remove unused BriaFibo imports from __init__.py to streamline modular pipelines.
      
      * Refactor documentation comments in BriaFibo classes to indicate inspiration from existing implementations
      
      - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied.
      - Enhanced clarity on the origins of the methods to maintain proper attribution.
      
      * change Inspired by to Based on
      
      * add reference link and fix trailing whitespace
      
      * Add BriaFiboTransformer2DModel documentation and update comments in BriaFibo classes
      
      - Introduced a new documentation file for BriaFiboTransformer2DModel.
      - Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution.
      
      ---------
      Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
    • Fix typos in kandinsky5 docs (#12552) · 40528e9a
      Meatfucker authored
      Update kandinsky5.md
      
      Fix typos
    • Kandinsky 5 10 sec (NABLA support) (#12520) · 5afbcce1
      Lev Novitskiy authored
      
      
      * add transformer pipeline first version
      
      * updates
      
      * fix 5sec generation
      
      * rewrite Kandinsky5T2VPipeline to diffusers style
      
      * add multiprompt support
      
      * remove prints in pipeline
      
      * add nabla attention
      
      * Wrap Transformer in Diffusers style
      
      * fix license
      
      * fix prompt type
      
      * add gradient checkpointing and peft support
      
      * add usage example
      
      * Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py
      Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
      
      * Update src/diffusers/models/transformers/transformer_kandinsky.py
      Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
      
      * remove unused imports
      
      * add 10 second models support
      
      * Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * remove no_grad and simplified prompt paddings
      
      * moved template to __init__
      
      * Update src/diffusers/models/transformers/transformer_kandinsky.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * moved sdpa inside processor
      
      * remove oneline function
      
      * remove reset_dtype methods
      
      * Transformer: move all methods to forward
      
      * separated prompt encoding
      
      * refactoring
      
      * refactoring according to https://github.com/huggingface/diffusers/commit/acabbc0033d4b4933fc651766a4aa026db2e6dc1
      
      * fixed
      
      * style +copies
      
      * Update src/diffusers/models/transformers/transformer_kandinsky.py
      Co-authored-by: Charles <charles@huggingface.co>
      
      * more
      
      * Apply suggestions from code review
      
      * add lora loader doc
      
      * add compiled Nabla Attention
      
      * all needed changes for 10 sec models are added!
      
      * add docs
      
      * Apply style fixes
      
      * update docs
      
      * add kandinsky5 to toctree
      
      * add tests
      
      * fix tests
      
      * Apply style fixes
      
      * update tests
      
      ---------
      Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Charles <charles@huggingface.co>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
  9. 27 Oct, 2025 1 commit
  10. 24 Oct, 2025 1 commit
  11. 22 Oct, 2025 1 commit
    • Prx (#12525) · dd07b19e
      David Bertoin authored
      * rename photon to prx
      
      * rename photon into prx
      
      * Revert .gitignore to state before commit b7fb0fe9d63bf766bbe3c42ac154a043796dd370
      
      * make fix-copies
  12. 21 Oct, 2025 1 commit
  13. 18 Oct, 2025 1 commit
  14. 17 Oct, 2025 1 commit
  15. 15 Oct, 2025 2 commits
  16. 14 Oct, 2025 1 commit
    • Fix missing load_video documentation and load_video import in WanVideoToVideoPipeline example code (#12472) · a4bc8454
      Meatfucker authored
      
      * Update utilities.md
      
      Update missing load_video documentation
      
      * Update pipeline_wan_video2video.py
      
      Fix missing load_video import in example code
  17. 13 Oct, 2025 1 commit
  18. 11 Oct, 2025 1 commit
  19. 30 Sep, 2025 2 commits
  20. 24 Sep, 2025 1 commit
    • Context Parallel w/ Ring & Ulysses & Unified Attention (#11941) · dcb6dd9b
      Aryan authored
      
      
      * update
      
      * update
      
      * add coauthor
      Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
      
      * improve test
      
      * handle ip adapter params correctly
      
      * fix chroma qkv fusion test
      
      * fix fastercache implementation
      
      * fix more tests
      
      * fight more tests
      
      * add back set_attention_backend
      
      * update
      
      * update
      
      * make style
      
      * make fix-copies
      
      * make ip adapter processor compatible with attention dispatcher
      
      * refactor chroma as well
      
      * remove rmsnorm assert
      
      * minify and deprecate npu/xla processors
      
      * update
      
      * refactor
      
      * refactor; support flash attention 2 with cp
      
      * fix
      
      * support sage attention with cp
      
      * make torch compile compatible
      
      * update
      
      * refactor
      
      * update
      
      * refactor
      
      * refactor
      
      * add ulysses backward
      
      * try to make dreambooth script work; accelerator backward not playing well
      
      * Revert "try to make dreambooth script work; accelerator backward not playing well"
      
      This reverts commit 768d0ea6fa6a305d12df1feda2afae3ec80aa449.
      
      * workaround compilation problems with triton when doing all-to-all
      
      * support wan
      
      * handle backward correctly
      
      * support qwen
      
      * support ltx
      
      * make fix-copies
      
      * Update src/diffusers/models/modeling_utils.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * apply review suggestions
      
      * update docs
      
      * add explanation
      
      * make fix-copies
      
      * add docstrings
      
      * support passing parallel_config to from_pretrained
      
      * apply review suggestions
      
      * make style
      
      * update
      
      * Update docs/source/en/api/parallel.md
      Co-authored-by: Aryan <aryan@huggingface.co>
      
      * up
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
  21. 22 Sep, 2025 1 commit
  22. 10 Sep, 2025 1 commit
  23. 08 Sep, 2025 1 commit
  24. 31 Aug, 2025 1 commit
  25. 27 Aug, 2025 1 commit
  26. 26 Aug, 2025 1 commit
    • Propose to update & upgrade SkyReels-V2 (#12167) · 5fcd5f56
      Tolga Cangöz authored
      * fix: update SkyReels-V2 documentation and moving into attn dispatcher
      
      * Refactors SkyReelsV2's attention implementation
      
      * style
      
      * up
      
      * Fixes formatting in SkyReels-V2 documentation
      
      Wraps the visual demonstration section in a Markdown code block.
      
      This change corrects the rendering of ASCII diagrams and examples, improving the overall readability of the document.
      
      * Docs: Condense example arrays in skyreels_v2 guide
      
      Improves the readability of the `step_matrix` examples by replacing long sequences of repeated numbers with a more compact `value×count` notation.
      
      This change makes the underlying data patterns in the examples easier to understand at a glance.
      
      * Add _repeated_blocks attribute to SkyReelsV2Transformer3DModel
      
      * Refactor rotary embedding calculations in SkyReelsV2 to separate cosine and sine frequencies
      
      * Enhance SkyReels-V2 documentation: update model loading for GPU support and remove outdated notes
      
      * up
      
      * up
      
      * Update model_id in SkyReels-V2 documentation
      
      * up
      
      * refactor: remove device_map parameter for model loading and add pipeline.to("cuda") for GPU allocation
      
      * fix: update copyright year to 2025 in skyreels_v2.md
      
      * docs: enhance parameter examples and formatting in skyreels_v2.md
      
      * docs: update example formatting and add notes on LoRA support in skyreels_v2.md
      
      * refactor: remove copied comments from transformer_wan in SkyReelsV2 classes
      
      * Clean up comments in skyreels_v2.md
      
      Removed comments about acceleration helpers and Flash Attention installation.
      
      * Add deprecation warning for `SkyReelsV2AttnProcessor2_0` class
  27. 25 Aug, 2025 1 commit
  28. 22 Aug, 2025 3 commits
  29. 20 Aug, 2025 1 commit
    • Bria 3 2 pipeline (#12010) · 7993be9e
      galbria authored
      
      
      * Add Bria model and pipeline to diffusers
      
      - Introduced `BriaTransformer2DModel` and `BriaPipeline` for enhanced image generation capabilities.
      - Updated import structures across various modules to include the new Bria components.
      - Added utility functions and output classes specific to the Bria pipeline.
      - Implemented tests for the Bria pipeline to ensure functionality and output integrity.
      
      * with working tests
      
      * style and quality pass
      
      * adding docs
      
      * add to overview
      
      * fixes from "make fix-copies"
      
      * Refactor transformer_bria.py and pipeline_bria.py: Introduce new EmbedND class for rotary position embedding, and enhance Timestep and TimestepProjEmbeddings classes. Add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py.
      
      * remove redundant and duplicate tests and fix bf16 slow test
      
      * style fixes
      
      * small doc update
      
      * Enhance Bria 3.2 documentation and implementation
      
      - Updated the GitHub repository link for Bria 3.2.
      - Added usage instructions for the gated model access.
      - Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture.
      - Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline.
      - Updated the pipeline output class to reflect Bria-specific functionality.
      - Adjusted test cases to align with the new Bria model structure.
      
      * Refactor Bria model components and update documentation
      
      - Removed outdated inference example from Bria 3.2 documentation.
      - Introduced the BriaTransformerBlock class to enhance model architecture.
      - Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`.
      - Improved import structure in the Bria pipeline to handle optional dependencies.
      - Adjusted test cases to reflect changes in model dtype assertions.
      
      * Update Bria model reference in documentation to reflect new file naming convention
      
      * Update docs/source/en/_toctree.yml
      
      * Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly.
      
      * move the __call__ func to the end of file
      
      * Update BriaPipeline example to use bfloat16 for precision sensitivity for better result
      
      * make style && make quality && make fix-copies
      
      ---------
      Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
      Co-authored-by: Aryan <contact.aryanvs@gmail.com>
  30. 19 Aug, 2025 2 commits
  31. 18 Aug, 2025 1 commit