1. 19 Jun, 2025 2 commits
  2. 18 Jun, 2025 5 commits
  3. 17 Jun, 2025 2 commits
  4. 16 Jun, 2025 4 commits
    • Add Pruna optimization framework documentation (#11688) · 9b834f87
      David Berenstein authored
      
      
      * Add Pruna optimization framework documentation
      
      - Introduced a new section for Pruna in the table of contents.
      - Added comprehensive documentation for Pruna, detailing its optimization techniques, installation instructions, and examples for optimizing and evaluating models.
      
      * Enhance Pruna documentation with image alt text and code block formatting
      
      - Added alt text to images for better accessibility and context.
      - Changed code block syntax from diff to python for improved clarity.
      
      * Add installation section to Pruna documentation
      
      - Introduced a new installation section in the Pruna documentation to guide users on how to install the framework.
      - Enhanced the overall clarity and usability of the documentation for new users.
      
      * Update pruna.md
      
      * Update pruna.md
      
      * Update Pruna documentation for model optimization and evaluation
      
      - Changed section titles for consistency and clarity, from "Optimizing models" to "Optimize models" and "Evaluating and benchmarking optimized models" to "Evaluate and benchmark models".
      - Enhanced descriptions to clarify the use of `diffusers` models and the evaluation process.
      - Added a new example for evaluating standalone `diffusers` models.
      - Updated references and links for better navigation within the documentation.
      
      * Refactor Pruna documentation for clarity and consistency
      
      - Removed outdated references to FLUX-juiced and streamlined the explanation of benchmarking.
      - Enhanced the description of evaluating standalone `diffusers` models.
      - Cleaned up code examples by removing unnecessary imports and comments for better readability.
      
      * Apply suggestions from code review
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
      
      * Enhance Pruna documentation with new examples and clarifications
      
      - Added an image to illustrate the optimization process.
      - Updated the explanation for sharing and loading optimized models on the Hugging Face Hub.
      - Clarified the evaluation process for optimized models using the EvaluationAgent.
      - Improved descriptions for defining metrics and evaluating standalone diffusers models.
      
      ---------
      Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
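The Pruna docs added above walk through optimizing a `diffusers` pipeline and then evaluating it. A minimal sketch of that flow, assuming Pruna's `SmashConfig`/`smash` API; the option names and model id below are illustrative, not copied from the merged docs:

```python
def make_pruna_options(cacher="deepcache", compiler="torch_compile"):
    """Collect optimization choices before handing them to a SmashConfig.

    The keys and values here are illustrative; the Pruna docs list the
    algorithms actually available for each slot.
    """
    return {"cacher": cacher, "compiler": compiler}


def optimize_pipeline(model_id="stable-diffusion-v1-5/stable-diffusion-v1-5"):
    """Load a diffusers pipeline and optimize it with Pruna (not run here)."""
    from diffusers import DiffusionPipeline
    from pruna import SmashConfig, smash

    pipe = DiffusionPipeline.from_pretrained(model_id)
    smash_config = SmashConfig()
    for key, value in make_pruna_options().items():
        smash_config[key] = value  # SmashConfig is configured dict-style
    return smash(model=pipe, smash_config=smash_config)
```

The smashed pipeline is called like the original one; evaluation then goes through Pruna's EvaluationAgent as described in the docs.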
    • Fix misleading comment (#11722) · 81426b0f
      Carl Thomé authored
    • [training] show how metadata stuff should be incorporated in training scripts. (#11707) · f0dba33d
      Sayak Paul authored
      
      
      * show how metadata stuff should be incorporated in training scripts.
      
      * typing
      
      * fix
      
      ---------
      Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
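The metadata change above is about persisting training settings alongside the saved LoRA weights. A hedged sketch of the general pattern rather than the exact helper in the training scripts: safetensors-style metadata must be a flat string-to-string mapping, so training arguments get serialized before saving. The argument names below are illustrative:

```python
import json


def build_training_metadata(args):
    """Flatten training arguments into the str -> str mapping that
    safetensors metadata requires.

    `args` is a mapping standing in for parsed CLI arguments; the keys
    used here are illustrative.
    """
    raw = {
        "rank": args["rank"],
        "lora_alpha": args["lora_alpha"],
        "learning_rate": args["learning_rate"],
    }
    # Every key and value must be a string for safetensors metadata.
    return {key: json.dumps(value) for key, value in raw.items()}
```

The resulting dict would be passed through to the save call, e.g. something like `save_file(state_dict, path, metadata=build_training_metadata(args))`, so a later load can recover how the adapter was trained.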
    • [LoRA] fix flux lora loader when return_metadata is true for non-diffusers (#11716) · d1db4f85
      Sayak Paul authored
      * fix flux lora loader when return_metadata is true for non-diffusers
      
      * remove annotation
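The bug fixed here is a general hazard: non-diffusers LoRA checkpoints carry no metadata, so a loader that promises a `(state_dict, metadata)` pair when metadata is requested has to tolerate a missing entry rather than fail. An illustrative reduction of that pattern, not the actual diffusers loader:

```python
def load_lora(checkpoint, return_metadata=False):
    """Return the LoRA state dict, plus metadata when requested.

    `checkpoint` is a plain dict standing in for a parsed file.
    Non-diffusers checkpoints simply lack the "metadata" key, so the
    loader defaults it to None instead of raising.
    """
    state_dict = checkpoint["state_dict"]
    if return_metadata:
        return state_dict, checkpoint.get("metadata")  # None when absent
    return state_dict
```

Callers that asked for metadata then check for `None` before using it, which is exactly the case the non-diffusers code path was tripping over.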
  5. 14 Jun, 2025 1 commit
    • Chroma Pipeline (#11698) · 8adc6003
      Edna authored
      
      
      * working state from hameerabbasi and iddl
      
      * working state from hameerabbasi and iddl (transformer)
      
      * working state (normalization)
      
      * working state (embeddings)
      
      * add chroma loader
      
      * add chroma to mappings
      
      * add chroma to transformer init
      
      * take out variant stuff
      
      * get decently far in changing variant stuff
      
      * add chroma init
      
      * make chroma output class
      
      * add chroma transformer to dummy tp
      
      * add chroma to init
      
      * add chroma to init
      
      * fix single file
      
      * update
      
      * update
      
      * add chroma to auto pipeline
      
      * add chroma to pipeline init
      
      * change to chroma transformer
      
      * take out variant from blocks
      
      * swap embedder location
      
      * remove prompt_2
      
      * work on swapping text encoders
      
      * remove mask function
      
      * don't modify mask (for now)
      
      * wrap attn mask
      
      * no attn mask (can't get it to work)
      
      * remove pooled prompt embeds
      
      * change to my own unpooled embedder
      
      * fix load
      
      * take pooled projections out of transformer
      
      * ensure correct dtype for chroma embeddings
      
      * update
      
      * use dn6 attn mask + fix true_cfg_scale
      
      * use chroma pipeline output
      
      * use DN6 embeddings
      
      * remove guidance
      
      * remove guidance embed (pipeline)
      
      * remove guidance from embeddings
      
      * don't return length
      
      * don't change dtype
      
      * remove unused stuff, fix up docs
      
      * add chroma autodoc
      
      * add .md (oops)
      
      * initial chroma docs
      
      * undo don't change dtype
      
      * undo arxiv change
      
      unsure why that happened
      
      * fix hf papers regression in more places
      
      * Update docs/source/en/api/pipelines/chroma.md
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * do_cfg -> self.do_classifier_free_guidance
      
      * Update docs/source/en/api/models/chroma_transformer.md
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Update chroma.md
      
      * Move chroma layers into transformer
      
      * Remove pruned AdaLayerNorms
      
      * Add chroma fast tests
      
      * (untested) batch cond and uncond
      
      * Add # Copied from for shift
      
      * Update # Copied from statements
      
      * update norm imports
      
      * Revert cond + uncond batching
      
      * Add transformer tests
      
      * move chroma test (oops)
      
      * chroma init
      
      * fix chroma pipeline fast tests
      
      * Update src/diffusers/models/transformers/transformer_chroma.py
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      
      * Move Approximator and Embeddings
      
      * Fix auto pipeline + make style, quality
      
      * make style
      
      * Apply style fixes
      
      * switch to new input ids
      
      * fix # Copied from error
      
      * remove # Copied from on protected members
      
      * try to fix import
      
      * fix import
      
      * make fix-copies
      
      * revert style fix
      
      * update chroma transformer params
      
      * update chroma transformer approximator init params
      
      * update to pad tokens
      
      * fix batch inference
      
      * Make more pipeline tests work
      
      * Make most transformer tests work
      
      * fix docs
      
      * make style, make quality
      
      * skip batch tests
      
      * fix test skipping
      
      * fix test skipping again
      
      * fix for tests
      
      * Fix all pipeline tests
      
      * update
      
      * push local changes, fix docs
      
      * add encoder test, remove pooled dim
      
      * default proj dim
      
      * fix tests
      
      * fix equal size list input
      
      * Revert "fix equal size list input"
      
      This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c.
      
      * update
      
      * update
      
      * update
      
      * update
      
      * update
      
      ---------
      Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
      Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
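Several of the commits above touch `true_cfg_scale` and the `do_cfg -> self.do_classifier_free_guidance` rename. The gating condition reduces to a small predicate; the helper below sketches it, and `demo()` shows roughly how the finished pipeline would be used. The checkpoint name is a placeholder, not a real model id, and the call signature is an assumption based on the commit messages:

```python
def do_classifier_free_guidance(true_cfg_scale, negative_prompt=None):
    """True CFG runs a second (unconditional) forward pass, so it only
    makes sense when the scale exceeds 1 and a negative prompt was
    actually supplied."""
    return true_cfg_scale > 1.0 and negative_prompt is not None


def demo():
    """Rough usage of the new pipeline (not executed here)."""
    import torch
    from diffusers import ChromaPipeline

    pipe = ChromaPipeline.from_pretrained(
        "<chroma-checkpoint>",  # placeholder; see the Chroma docs for real ids
        torch_dtype=torch.bfloat16,
    )
    image = pipe(
        "A photo of a cat",
        negative_prompt="blurry, low quality",
        true_cfg_scale=4.0,
    ).images[0]
    return image
```

With `true_cfg_scale=1.0` or no negative prompt, the predicate is false and the pipeline would skip the extra unconditional pass.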
  6. 13 Jun, 2025 5 commits
  7. 12 Jun, 2025 1 commit
  8. 11 Jun, 2025 9 commits
  9. 10 Jun, 2025 2 commits
  10. 09 Jun, 2025 3 commits
  11. 08 Jun, 2025 1 commit
  12. 06 Jun, 2025 3 commits
  13. 05 Jun, 2025 2 commits