- 12 Apr, 2023 11 commits
-
-
Patrick von Platen authored
* Finish docs textual inversion * Apply suggestions from code review Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co> --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com> Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
Nipun Jindal authored
* [2737]: Add Karras DPMSolverMultistepScheduler * [2737]: Add Karras DPMSolverMultistepScheduler * Add test * Apply suggestions from code review Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> * fix: repo consistency. * remove Copied from statement from the set_timestep method. * fix: test * Empty commit. Co-authored-by: njindal <njindal@adobe.com> --------- Co-authored-by: njindal <njindal@adobe.com> Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
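A usage sketch of the Karras noise schedule this commit adds, assuming the `use_karras_sigmas` flag on `DPMSolverMultistepScheduler`; the checkpoint name is illustrative:

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Swap in the multistep DPM-Solver and enable the Karras sigma schedule.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe("an astronaut riding a horse", num_inference_steps=25).images[0]
```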
-
Sean Sube authored
* add support for prompt embeds to SD ONNX pipeline * fix up the pipeline copies * add prompt embeds param to other ONNX pipelines * fix up prompt embeds param for SD upscaling ONNX pipeline * add missing type annotations to ONNX pipes
-
Will Berman authored
* fix pipeline __setattr__ * add test --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Andy authored
* initial commit for lora test cases * help a bit with lora for 3d * fixed lora tests * replaced redundant code --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com> Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
-
Pedro Cuenca authored
* add use_memory_efficient params placeholder * test * add memory efficient attention jax * add memory efficient attention jax * newline * forgot dot * Rename use_memory_efficient * Keep dtype last. * Actually use key_chunk_size * Rename symbol * Apply style * Rename use_memory_efficient * Keep dtype last * Pass `use_memory_efficient_attention` in `from_pretrained` * Move JAX memory efficient attention to attention_flax. * Simple test. * style --------- Co-authored-by:
muhammad_hanif <muhammad_hanif@sofcograha.co.id> Co-authored-by:
MuhHanif <48muhhanif@gmail.com>
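Per the commit, the flag is forwarded through `from_pretrained`; a hedged loading sketch for the Flax pipeline (checkpoint and revision are illustrative, and the flag is assumed to be passed straight through to the model config):

```python
import jax.numpy as jnp
from diffusers import FlaxStableDiffusionPipeline

# Enable the chunked, memory-efficient attention kernel when loading weights.
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    revision="bf16",
    dtype=jnp.bfloat16,
    use_memory_efficient_attention=True,
)
```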
-
Susung Hong authored
* Update index.mdx * Edit docs & add HF space link * Only change equation numbers in comments
-
Sayak Paul authored
* fix: norm group test for UNet3D. * fix: unet rejig. * fix: unwrapping when running validation inputs. * unwrapping the unet too. * fix: device. * better unwrapping. * unwrapping before ema. * unwrapping.
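These fixes center on unwrapping the model from Accelerate's wrappers before validation and EMA updates; a minimal, illustrative sketch of the general pattern (not taken from the training script itself):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(4, 4))

# accelerator.prepare may wrap the module (DDP, AMP, etc.); unwrap it before
# running validation or passing its parameters to an EMA helper.
unwrapped = accelerator.unwrap_model(model)
print(type(unwrapped))  # plain torch.nn.Linear
```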
-
Patrick von Platen authored
* fix slow tests * make style
-
Sayak Paul authored
* add: first draft for a better LoRA enabler. * make fix-copies. * feat: backward compatibility. * add: entry to the docs. * add: tests. * fix: docs. * fix: norm group test for UNet3D. * feat: add support for flat dicts. * add deprecation message instead of warning.
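A hedged usage sketch for the new LoRA loading path, assuming a pipeline-level `load_lora_weights` entry point (the exact name is not stated in the log) and an illustrative weights path:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA weights; per this commit, a flat state dict is also accepted.
pipe.load_lora_weights("path/to/lora-weights")  # hypothetical path

image = pipe("pixel art of a corgi", num_inference_steps=30).images[0]
```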
-
Sayak Paul authored
* fix: norm group test for UNet3D. * fix: type-casting issue in controlnet training.
-
- 11 Apr, 2023 18 commits
-
-
Will Berman authored
add AttnAddedKVProcessor2_0 block
-
Will Berman authored
add group norm type to attention processor cross attention norm. This lets the cross attention norm use either a group norm block or a layer norm block. The group norm operates along the channels dimension and requires input shape (batch size, channels, *), whereas the layer norm with a single `normalized_shape` dimension only operates over the last dimension, i.e. (*, channels). The channels we want to normalize are the hidden dimension of the encoder hidden states. By convention, the encoder hidden states are always passed as (batch size, sequence length, hidden states). This means the layer norm can operate on the tensor without modification, but the group norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length). All existing attention processors share the same logic, so it is consolidated in a helper function `prepare_encoder_hidden_states`. prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten. move norm_cross defined check to outside norm_encoder_hidden_states. add missing attn.norm_cross check
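A minimal sketch of the helper described above, assuming an `attn` module that carries the `norm_cross` block; the library's actual implementation may differ in detail:

```python
import torch

def norm_encoder_hidden_states(attn, encoder_hidden_states):
    # encoder_hidden_states: (batch, sequence_length, hidden_dim)
    if isinstance(attn.norm_cross, torch.nn.LayerNorm):
        # LayerNorm already normalizes over the last (hidden) dimension.
        return attn.norm_cross(encoder_hidden_states)
    # GroupNorm expects (batch, channels, *), so move the hidden dimension to
    # the channel position, normalize, then move it back.
    h = encoder_hidden_states.transpose(1, 2)
    h = attn.norm_cross(h)
    return h.transpose(1, 2)
```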
-
Will Berman authored
* unet time embedding activation function * typo act_fn -> time_embedding_act_fn * flatten conditional
-
Chanchana Sornsoontorn authored
* ⚙️ chore(train_controlnet) fix typo in logger message * ⚙️ chore(models) refactor modules order; make them the same as calling order. When printing the BasicTransformerBlock to stdout, I think it's crucial that the attributes are shown in the proper order. Also, the "3. Feed Forward" comment previously did not make sense: it should have been next to self.ff, but it was instead next to self.norm3. * correct many tests * remove bogus file * make style * correct more tests * finish tests * fix one more * make style * make unclip deterministic * ⚙️ chore(models/attention) reorganize comments in BasicTransformerBlock class --------- Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Will Berman authored
* add only cross attention to simple attention blocks * add test for only_cross_attention re: @patrickvonplaten * mid_block_only_cross_attention better default allow mid_block_only_cross_attention to default to `only_cross_attention` when `only_cross_attention` is given as a single boolean
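An illustrative sketch of the defaulting rule from the last bullet; the function name here is hypothetical, and the real logic lives in the UNet constructor:

```python
def resolve_mid_block_only_cross_attention(only_cross_attention, mid_block_only_cross_attention=None):
    # If only_cross_attention was given as a single bool and no explicit value
    # was passed for the mid block, the mid block inherits that bool.
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        mid_block_only_cross_attention = only_cross_attention
    # Otherwise fall back to full (self + cross) attention in the mid block.
    if mid_block_only_cross_attention is None:
        mid_block_only_cross_attention = False
    return mid_block_only_cross_attention
```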
-
Pedro Cuenca authored
* Fix invocation of some slow tests. We use __call__ rather than pmapping the generation function ourselves because the number of static arguments is different now. * style
-
Pedro Cuenca authored
When doing generation manually and using guidance_scale as a static argument.
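"Static argument" here refers to `jax.pmap`'s `static_broadcasted_argnums`. A generic illustration of the pattern (this is not the pipeline's actual generation function; the function and argument index are purely for demonstration):

```python
import jax
import jax.numpy as jnp

def scale_latents(latents, guidance_scale):
    # guidance_scale is a plain Python float baked into the compiled program.
    return latents * guidance_scale

# Argument index 1 is declared static: it is broadcast to all devices, and a
# new value triggers recompilation rather than being traced as an array.
p_scale = jax.pmap(scale_latents, static_broadcasted_argnums=(1,))

latents = jnp.ones((jax.device_count(), 4))
out = p_scale(latents, 7.5)
print(out.shape)
```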
-
George Ogden authored
* Update documentation. Based on the sampling process, the width and height must be powers of 2, since the samples halve in size each time. * make style
-
Will Berman authored
* `AttentionProcessor.group_norm` num_channels should be `query_dim` The group_norm on the attention processor should really norm the number of channels in the query _not_ the inner dim. This wasn't caught before because the group_norm is only used by the added kv attention processors and the added kv attention processors are only used by the karlo models which are configured such that the inner dim is the same as the query dim. * add_{k,v}_proj should be projecting to inner_dim
-
Will Berman authored
-
Will Berman authored
-
Patrick von Platen authored
-
J N Hearns authored
* Update composable_stable_diffusion.py Fix imports * Formatting * Formatting * Formatting
-
Steven Liu authored
* reuse-components * format
-
Patrick von Platen authored
* [Config] Fix config prints and save, load * Only use potential nn.Modules for dtype and device * Correct vae image processor * make sure in_channels is not accessed directly * make sure in channels is only accessed via config * Make sure schedulers only access config attributes * Make sure to access config in SAG * Fix vae processor and make style * add tests * uP * make style * Fix more naming issues * Final fix with vae config * change more
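A small sketch of the config-based attribute access this change enforces; the model id is illustrative:

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Architecture hyperparameters are read from the frozen config rather than
# as direct attributes on the module.
print(unet.config.in_channels)
print(unet.config.sample_size)
```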
-
Patrick von Platen authored
-
Mishig authored
* Update contribution.mdx hotfix for doc-builder bug when parsing quotes in headings * quotation replace
-
Pedro Cuenca authored
-
- 10 Apr, 2023 11 commits
-
-
Rogério Júnior authored
-
Andranik Movsisyan authored
* add TextToVideoZeroPipeline and CrossFrameAttnProcessor * add docs for text-to-video zero * add teaser image for text-to-video zero docs * Address review changes. Add documentation. Add test * clean up the code in pipeline_text_to_video.py. Add descriptive comments and docstrings * make style && make quality * make fix-copies * make requested changes to docs. use huggingface server links for resources, delete res folder * make style && make quality && make fix-copies * make style && make quality * Apply suggestions from code review --------- Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
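A usage sketch for the new pipeline, assuming the `TextToVideoZeroPipeline` class and a Stable Diffusion base checkpoint as described above; prompt and output path are illustrative:

```python
import torch
import imageio
from diffusers import TextToVideoZeroPipeline

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Cross-frame attention keeps the generated frames temporally consistent.
result = pipe(prompt="a panda surfing a wave").images
frames = [(frame * 255).astype("uint8") for frame in result]
imageio.mimsave("video.mp4", frames, fps=4)
```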
-
William Berman authored
-
William Berman authored
-
luanjintai authored
-
luanjintai authored
-
Pedro Cuenca authored
* Initial draft of Core ML docs. * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Fix Core ML spelling * Apply the rest of suggestions. * Attempt to fix hyperlink inside Tip. * Apply suggestions from code review Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com> * Apply suggestions from code review --------- Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
-
William Berman authored
`encoder_hid_dim` provides an additional projection for the input `encoder_hidden_states`, from `encoder_hid_dim` to `cross_attention_dim`.
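A minimal sketch of what that projection amounts to; the dimensions are illustrative and the actual module wiring lives inside the UNet:

```python
import torch
import torch.nn as nn

encoder_hid_dim, cross_attention_dim = 1024, 768
batch, seq_len = 2, 77

# Project text-encoder hidden states into the width the cross-attention
# layers expect before they are fed to the UNet blocks.
encoder_hid_proj = nn.Linear(encoder_hid_dim, cross_attention_dim)
encoder_hidden_states = torch.randn(batch, seq_len, encoder_hid_dim)
encoder_hidden_states = encoder_hid_proj(encoder_hidden_states)
print(encoder_hidden_states.shape)  # torch.Size([2, 77, 768])
```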
-
William Berman authored
-
William Berman authored
-
William Berman authored
-