1. 20 Aug, 2025 1 commit
    • Bria 3.2 pipeline (#12010) · 7993be9e
      galbria authored
      
      
      * Add Bria model and pipeline to diffusers
      
      - Introduced `BriaTransformer2DModel` and `BriaPipeline` for Bria text-to-image generation.
      - Updated import structures across various modules to include the new Bria components.
      - Added utility functions and output classes specific to the Bria pipeline.
      - Implemented tests for the Bria pipeline to ensure functionality and output integrity.
      
      * with working tests
      
      * style and quality pass
      
      * adding docs
      
      * add to overview
      
      * fixes from "make fix-copies"
      
      * Refactor transformer_bria.py and pipeline_bria.py: Introduce new EmbedND class for rotary position embedding, and enhance Timestep and TimestepProjEmbeddings classes. Add utility functions for handling negative prompts and generating original sigmas in pipeline_bria.py.
      
      * remove redundant and duplicate tests and fix bf16 slow test
      
      * style fixes
      
      * small doc update
      
      * Enhance Bria 3.2 documentation and implementation
      
      - Updated the GitHub repository link for Bria 3.2.
      - Added usage instructions for the gated model access.
      - Introduced the BriaTransformerBlock and BriaAttention classes to the model architecture.
      - Refactored existing classes to integrate Bria-specific components, including BriaEmbedND and BriaPipeline.
      - Updated the pipeline output class to reflect Bria-specific functionality.
      - Adjusted test cases to align with the new Bria model structure.
      
      * Refactor Bria model components and update documentation
      
      - Removed outdated inference example from Bria 3.2 documentation.
      - Introduced the BriaTransformerBlock class to enhance model architecture.
      - Updated attention handling to use `attention_kwargs` instead of `joint_attention_kwargs`.
      - Improved import structure in the Bria pipeline to handle optional dependencies.
      - Adjusted test cases to reflect changes in model dtype assertions.
      
      * Update Bria model reference in documentation to reflect new file naming convention
      
      * Update docs/source/en/_toctree.yml
      
      * Refactor BriaPipeline to inherit from DiffusionPipeline instead of FluxPipeline, updating imports accordingly.
      
      * move the __call__ func to the end of the file
      
      * Update BriaPipeline example to use bfloat16, since the model is precision-sensitive, for better results (see the usage sketch after this list)
      
      * make style && make quality && make fix-copies
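      A minimal usage sketch for the pipeline added here (editor's illustration, not part of the PR): the Hub repo id "briaai/BRIA-3.2" is an assumption taken from the "Bria 3.2" naming above, and the gated-access and bfloat16 details follow the commit notes.

      import torch
      from diffusers import BriaPipeline

      # Repo id is assumed from the Bria 3.2 naming above; the checkpoint may be
      # gated and require accepting the license on the Hub first.
      pipe = BriaPipeline.from_pretrained(
          "briaai/BRIA-3.2",
          torch_dtype=torch.bfloat16,  # bfloat16 per the precision-sensitivity note
      )
      pipe.to("cuda")

      image = pipe(
          "A photo of a red bicycle leaning against a brick wall",
          num_inference_steps=30,
          guidance_scale=5.0,
      ).images[0]
      image.save("bria_example.png")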
      
      ---------
      Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
      Co-authored-by: Aryan <contact.aryanvs@gmail.com>
  2. 15 Dec, 2024 1 commit
    • [Sana] Add Sana, including `SanaPipeline`, `SanaPAGPipeline`,... · 5a196e3d
      Junsong Chen authored
      
      [Sana] Add Sana, including `SanaPipeline`, `SanaPAGPipeline`, `LinearAttentionProcessor`, `Flow-based DPM-solver` and so on. (#9982)
      
      * first add a script for DC-AE;
      
      * DC-AE init
      
      * replace triton with custom implementation
      
      * 1. rename file and remove unused code;
      
      * no longer rely on omegaconf and dataclass
      
      * replace custom activation with diffusers activation
      
      * remove dc_ae attention in attention_processor.py
      
      * inherit from ModelMixin
      
      * inherit from ConfigMixin
      
      * dc-ae reduce to one file
      
      * update downsample and upsample
      
      * clean code
      
      * support DecoderOutput
      
      * remove get_same_padding and val2tuple
      
      * remove autocast and some assert
      
      * update ResBlock
      
      * remove contents within super().__init__
      
      * Update src/diffusers/models/autoencoders/dc_ae.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * remove opsequential
      
      * update other blocks to support the removal of build_norm
      
      * remove build encoder/decoder project in/out
      
      * remove inheritance of RMSNorm2d from LayerNorm
      
      * remove reset_parameters for RMSNorm2d
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * remove device and dtype in RMSNorm2d __init__
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Update src/diffusers/models/autoencoders/dc_ae.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Update src/diffusers/models/autoencoders/dc_ae.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * Update src/diffusers/models/autoencoders/dc_ae.py
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      
      * remove op_list & build_block
      
      * remove build_stage_main
      
      * change file name to autoencoder_dc
      
      * move LiteMLA to attention.py
      
      * align with other vae decode output;
      
      * add DC-AE into init files;
      
      * update
      
      * make quality && make style;
      
      * quick push before dgx disappears again
      
      * update
      
      * make style
      
      * update
      
      * update
      
      * fix
      
      * refactor
      
      * refactor
      
      * refactor
      
      * update
      
      * possibly change to nn.Linear
      
      * refactor
      
      * make fix-copies
      
      * replace vae with ae
      
      * replace get_block_from_block_type with get_block
      
      * replace downsample_block_type from Conv to conv for consistency
      
      * add scaling factors
      
      * incorporate changes for all checkpoints
      
      * make style
      
      * move mla to attention processor file; split qkv conv to linears
      
      * refactor
      
      * add tests
      
      * from original file loader
      
      * add docs
      
      * add standard autoencoder methods
      
      * combine attention processor
      
      * fix tests
      
      * update
      
      * minor fix
      
      * minor fix
      
      * minor fix & in/out shortcut rename
      
      * minor fix
      
      * make style
      
      * fix paper link
      
      * update docs
      
      * update single file loading
      
      * make style
      
      * remove single file loading support; todo for DN6
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * add abstract
      
      * 1. add DCAE into diffusers;
      2. make style and make quality;
      
      * add DCAE_HF into diffusers;
      
      * bug fixed;
      
      * add SanaPipeline, SanaTransformer2D into diffusers (see the usage sketch after this list);
      
      * add sanaLinearAttnProcessor2_0;
      
      * first update for SanaTransformer;
      
      * first update for SanaPipeline;
      
      * first successful run of SanaPipeline;
      
      * model output finally matches the original model with the same input;
      
      * code update;
      
      * code update;
      
      * add a flow dpm-solver script
      
      * 🎉[important update]
      1. Integrate flow-dpm-solver into diffusers;
      2. finally run successfully on both `FlowMatchEulerDiscreteScheduler` and `FlowDPMSolverMultistepScheduler`;
      
      * 🎉🔧
      
      [important update & fix huge bugs!!]
      1. add SanaPAGPipeline & several related Sana linear attention operators;
      2. `SanaTransformer2DModel` now supports multi-resolution input;
      3. fix the multi-scale HW bugs in SanaPipeline and SanaPAGPipeline;
      4. fix the flow-dpm-solver set_timesteps() init `model_output` and `lower_order_nums` bugs;
      
      * remove prints;
      
      * add script to convert Sana official checkpoints to diffusers-format safetensors.
      
      * Update src/diffusers/models/transformers/sana_transformer_2d.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Update src/diffusers/models/transformers/sana_transformer_2d.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Update src/diffusers/models/transformers/sana_transformer_2d.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Update src/diffusers/pipelines/pag/pipeline_pag_sana.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Update src/diffusers/models/transformers/sana_transformer_2d.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Update src/diffusers/models/transformers/sana_transformer_2d.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Update src/diffusers/pipelines/sana/pipeline_sana.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Update src/diffusers/pipelines/sana/pipeline_sana.py
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * update Sana for DC-AE's recent commit;
      
      * make style && make quality
      
      * Add StableDiffusion3PAGImg2Img Pipeline + Fix SD3 Unconditional PAG (#9932)
      
      * fix progress bar updates in SD 1.5 PAG Img2Img pipeline
      
      ---------
      Co-authored-by: Vinh H. Pham <phamvinh257@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      
      * allow the vae to be None in `__init__` of `SanaPipeline`
      
      * Update src/diffusers/models/transformers/sana_transformer_2d.py
      Co-authored-by: hlky <hlky@hlky.ac>
      
      * change the ae related code due to the latest update of DCAE branch;
      
      * change the ae related code due to the latest update of DCAE branch;
      
      * 1. change code based on AutoencoderDC;
      2. fix the bug of new GLUMBConv;
      3. runs successfully;
      
      * update to resolve review conversations.
      
      * 1. fix bugs and run the conversion script successfully;
      2. download the ckpt from the Hub automatically;
      
      * make style && make quality;
      
      * 1. remove unused parameters in init;
      2. code update;
      
      * remove test file
      
      * refactor; add docs; add tests; update conversion script
      
      * make style
      
      * make fix-copies
      
      * refactor
      
      * update pipelines
      
      * pag tests and refactor
      
      * remove sana pag conversion script
      
      * handle weight casting in conversion script
      
      * update conversion script
      
      * add a processor
      
      * 1. add bf16 pth file path;
      2. add complex human instruct in pipeline;
      
      * fix fast tests
      
      * change gemma-2-2b-it ckpt to a non-gated repo;
      
      * fix the pth path bug in conversion script;
      
      * change grad ckpt to original; make style
      
      * fix the complex_human_instruct bug and typo;
      
      * remove dpmsolver flow scheduler
      
      * apply review suggestions
      
      * change the default scheduler from `FlowMatchEulerDiscreteScheduler` to `DPMSolverMultistepScheduler` with flow matching.
      
      * fix the tokenizer.padding_side='right' bug;
      
      * update docs
      
      * make fix-copies
      
      * fix imports
      
      * fix docs
      
      * add integration test
      
      * update docs
      
      * update examples
      
      * fix convert_model_output in schedulers
      
      * fix failing tests
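      A minimal text-to-image sketch for the `SanaPipeline` added here (editor's illustration, not part of the PR): the Hub repo id is an assumption, so check the Sana docs for the actual converted diffusers checkpoints.

      import torch
      from diffusers import SanaPipeline

      pipe = SanaPipeline.from_pretrained(
          "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # assumed repo id
          torch_dtype=torch.bfloat16,  # a bf16 checkpoint path is mentioned above
      )
      pipe.to("cuda")

      image = pipe(
          prompt="a cyberpunk cat holding a neon sign that reads 'Sana'",
          num_inference_steps=20,
          guidance_scale=4.5,
      ).images[0]
      image.save("sana_example.png")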
      
      ---------
      Co-authored-by: Junyu Chen <chenjydl2003@gmail.com>
      Co-authored-by: YiYi Xu <yixu310@gmail.com>
      Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
      Co-authored-by: chenjy2003 <70215701+chenjy2003@users.noreply.github.com>
      Co-authored-by: Aryan <aryan@huggingface.co>
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      Co-authored-by: hlky <hlky@hlky.ac>
  3. 11 Jul, 2024 1 commit
  4. 12 Jun, 2024 1 commit
  5. 25 Sep, 2023 2 commits
  6. 22 Sep, 2023 1 commit
    • SDXL flax (#4254) · 3651b14c
      Pedro Cuenca authored
      
      
      * support transformer_layers_per_block in flax UNet
      
      * add support for text_time additional embeddings to Flax UNet
      
      * rename attention layers for VAE
      
      * add shape asserts when renaming attention layers
      
      * transpose VAE attention layers
      
      * add pipeline flax SDXL code [WIP] (a loading sketch follows this list)
      
      * continue adding pipeline flax SDXL code [WIP]
      
      * cleanup
      
      * Working on JIT support
      
      Fixed prompt embedding shapes so they work in parallel mode. Assuming we
      always have both text encoders for now, for simplicity.
      
      * Fixing embeddings (untested)
      
      * Remove spurious line
      
      * Shard guidance_scale when jitting.
      
      * Decode images
      
      * Fix sharding
      
      * style
      
      * Refiner UNet can be loaded.
      
      * Refiner / img2img pipeline
      
      * Allow latent outputs from base and latent inputs in refiner
      
      This makes it possible to chain base + refiner without having to use the
      vae decoder in the base model, the vae encoder in the refiner, skipping
      conversions to/from PIL, and avoiding TPU <-> CPU memory copies.
      
      * Adapt to FlaxCLIPTextModelOutput
      
      * Update Flax XL pipeline to FlaxCLIPTextModelOutput
      
      * make fix-copies
      
      * make style
      
      * add euler scheduler
      
      * Fix import
      
      * Fix copies, comment unused code.
      
      * Fix SDXL Flax imports
      
      * Fix euler discrete begin
      
      * improve init import
      
      * finish
      
      * put discrete euler in init
      
      * fix flax euler
      
      * Fix more
      
      * make style
      
      * correct init
      
      * correct init
      
      * Temporarily remove FlaxStableDiffusionXLImg2ImgPipeline
      
      * correct pipelines
      
      * finish
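      A loading sketch for the Flax SDXL pipeline added here (editor's illustration; it assumes the base checkpoint ships Flax/bf16 weights). Flax pipelines return their parameters separately so they can be replicated or sharded across TPU devices before a jitted/pmapped sampling call:

      import jax.numpy as jnp
      from flax.jax_utils import replicate
      from diffusers import FlaxStableDiffusionXLPipeline

      # Assumes Flax weights are available for this repo.
      pipeline, params = FlaxStableDiffusionXLPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",
          dtype=jnp.bfloat16,
      )

      # Parameters live outside the pipeline object, so replicate them across
      # local devices before running the pmapped denoising loop.
      params = replicate(params)

      Prompt preparation and the pmapped denoising loop then follow the same pattern as the existing Flax Stable Diffusion pipelines.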
      
      ---------
      Co-authored-by: Martin Müller <martin.muller.me@gmail.com>
      Co-authored-by: patil-suraj <surajp815@gmail.com>
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
  7. 11 Sep, 2023 1 commit
    • Lazy Import for Diffusers (#4829) · b6e0b016
      Dhruv Nair authored
      
      
      * initial commit
      
      * move modules to import struct
      
      * add dummy objects and _LazyModule (pattern sketched after this list)
      
      * add lazy import to schedulers
      
      * clean up unused imports
      
      * lazy import on models module
      
      * lazy import for schedulers module
      
      * add lazy import to pipelines module
      
      * lazy import altdiffusion
      
      * lazy import audio diffusion
      
      * lazy import audioldm
      
      * lazy import consistency model
      
      * lazy import controlnet
      
      * lazy import dance diffusion ddim ddpm
      
      * lazy import deepfloyd
      
      * lazy import kandinsky
      
      * lazy imports
      
      * lazy import semantic diffusion
      
      * lazy imports
      
      * lazy import stable diffusion
      
      * move sd output to its own module
      
      * clean up
      
      * lazy import t2iadapter
      
      * lazy import unclip
      
      * lazy import versatile and vq diffusion
      
      * lazy import vq diffusion
      
      * helper to fetch objects from modules
      
      * lazy import sdxl
      
      * lazy import txt2vid
      
      * lazy import stochastic karras
      
      * fix model imports
      
      * fix bug
      
      * lazy import
      
      * clean up
      
      * clean up
      
      * fixes for tests
      
      * fixes for tests
      
      * clean up
      
      * remove import of torch_utils from utils module
      
      * clean up
      
      * clean up
      
      * fix mistaken import statement
      
      * dedicated modules for exporting and loading
      
      * remove testing utils from utils module
      
      * fixes from merge conflicts
      
      * Update src/diffusers/pipelines/kandinsky2_2/__init__.py
      
      * fix docs
      
      * fix alt diffusion copied from
      
      * fix check dummies
      
      * fix more docs
      
      * remove accelerate import from utils module
      
      * add type checking
      
      * make style
      
      * fix check dummies
      
      * remove torch import from xformers check
      
      * clean up error message
      
      * fixes after upstream merges
      
      * dummy objects fix
      
      * fix tests
      
      * remove unused module import
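      The gist of the change, as a simplified sketch of the `_LazyModule` pattern these commits roll out across the package `__init__.py` files (the submodule and class names below are hypothetical; the real files also register dummy objects for missing optional dependencies):

      # Simplified __init__.py using the lazy-import pattern.
      from typing import TYPE_CHECKING

      from diffusers.utils import _LazyModule

      # Map of submodule name -> public names it exports; nothing is imported yet.
      _import_structure = {"pipeline_example": ["ExamplePipeline"]}  # hypothetical names

      if TYPE_CHECKING:
          # Static type checkers and IDEs still see the real imports.
          from .pipeline_example import ExamplePipeline
      else:
          import sys

          # At runtime, replace this module with a lazy proxy that only imports a
          # submodule when one of its attributes is first accessed.
          sys.modules[__name__] = _LazyModule(
              __name__, globals()["__file__"], _import_structure, module_spec=__spec__
          )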
      
      ---------
      Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>