    Chroma Pipeline (#11698) · 8adc6003
    Edna authored
    
    
    * working state from hameerabbasi and iddl
    
* working state from hameerabbasi and iddl (transformer)
    
    * working state (normalization)
    
    * working state (embeddings)
    
    * add chroma loader
    
    * add chroma to mappings
    
    * add chroma to transformer init
    
    * take out variant stuff
    
    * get decently far in changing variant stuff
    
    * add chroma init
    
    * make chroma output class
    
    * add chroma transformer to dummy tp
    
    * add chroma to init
    
    * add chroma to init
    
    * fix single file
    
    * update
    
    * update
    
    * add chroma to auto pipeline
    
    * add chroma to pipeline init
    
    * change to chroma transformer
    
    * take out variant from blocks
    
    * swap embedder location
    
    * remove prompt_2
    
    * work on swapping text encoders
    
    * remove mask function
    
* don't modify mask (for now)
    
    * wrap attn mask
    
    * no attn mask (can't get it to work)
    
    * remove pooled prompt embeds
    
* change to my own unpooled embedder
    
    * fix load
    
    * take pooled projections out of transformer
    
    * ensure correct dtype for chroma embeddings
    
    * update
    
    * use dn6 attn mask + fix true_cfg_scale
    
    * use chroma pipeline output
    
    * use DN6 embeddings
    
    * remove guidance
    
    * remove guidance embed (pipeline)
    
    * remove guidance from embeddings
    
    * don't return length
    
* don't change dtype
    
    * remove unused stuff, fix up docs
    
    * add chroma autodoc
    
    * add .md (oops)
    
    * initial chroma docs
    
    * undo don't change dtype
    
    * undo arxiv change
    
    unsure why that happened
    
    * fix hf papers regression in more places
    
    * Update docs/source/en/api/pipelines/chroma.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
    
    * do_cfg -> self.do_classifier_free_guidance
    
    * Update docs/source/en/api/models/chroma_transformer.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
    
    * Update chroma.md
    
    * Move chroma layers into transformer
    
    * Remove pruned AdaLayerNorms
    
    * Add chroma fast tests
    
    * (untested) batch cond and uncond
    
    * Add # Copied from for shift
    
    * Update # Copied from statements
    
    * update norm imports
    
    * Revert cond + uncond batching
    
    * Add transformer tests
    
    * move chroma test (oops)
    
    * chroma init
    
    * fix chroma pipeline fast tests
    
    * Update src/diffusers/models/transformers/transformer_chroma.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
    
    * Move Approximator and Embeddings
    
    * Fix auto pipeline + make style, quality
    
    * make style
    
    * Apply style fixes
    
    * switch to new input ids
    
    * fix # Copied from error
    
    * remove # Copied from on protected members
    
    * try to fix import
    
    * fix import
    
* make fix-copies
    
    * revert style fix
    
    * update chroma transformer params
    
    * update chroma transformer approximator init params
    
    * update to pad tokens
    
    * fix batch inference
    
    * Make more pipeline tests work
    
    * Make most transformer tests work
    
    * fix docs
    
    * make style, make quality
    
    * skip batch tests
    
    * fix test skipping
    
    * fix test skipping again
    
    * fix for tests
    
* Fix all pipeline tests
    
    * update
    
    * push local changes, fix docs
    
    * add encoder test, remove pooled dim
    
    * default proj dim
    
    * fix tests
    
    * fix equal size list input
    
    * Revert "fix equal size list input"
    
    This reverts commit 3fe4ad67d58d83715bc238f8654f5e90bfc5653c.
    
    * update
    
    * update
    
    * update
    
    * update
    
    * update
    
    ---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>