"...text-generation-inference.git" did not exist on "77758f603b87a15dec153ca4d5b6bfb6832f1cff"
- 10 Mar, 2023 1 commit
-
Patrick von Platen authored
* [From pretrained] Speed-up loading from cache
* Fix one more bug; make style
* Bigger refactor: factor out a function and improve it further
* Deprecate returning the cache folder
* Clean up; improve and add nice tests; simplify
* Fix the version; rename; correct the docstrings
* Apply suggestions from code review
Co-authored-by: Lucain <lucainp@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
-
- 09 Mar, 2023 1 commit
-
Patrick von Platen authored
* Improve error message
* Upload
-
- 07 Mar, 2023 1 commit
-
clarencechen authored
* Improve dynamic thresholding; update the code
* Add dynamic thresholding to DDIM and DDPM
* Encapsulate it and leverage the code-copy mechanism; update style
* Clean up the DDPM/DDIM constructor arguments
* Add a test; also add it to UniPC
Co-authored-by: Peter Lin <peterlin9863@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 06 Mar, 2023 1 commit
-
Patrick von Platen authored
-
- 03 Mar, 2023 1 commit
-
alvanli authored
Remove explicit message argument
-
- 02 Mar, 2023 2 commits
-
Ilmari Heikkinen authored
* Tiled VAE for high-res text2img and img2img
* Add the enable_vae_tiling API, docs, and tests; disable tiling for images that would have only one tile
* Tests use the channels_last memory format and a smaller test image; remove the tiling test from the fast tests
* make style; improve naming; apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
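A minimal usage sketch of the new API, assuming `enable_vae_tiling` lands on the Stable Diffusion pipeline as described (checkpoint and resolution are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the latents tile by tile so very large images fit in VRAM.
pipe.enable_vae_tiling()
image = pipe("a photo of a castle", width=2560, height=1536).images[0]
```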
-
Takuma Mori authored
* Add scaffold: copy convert_controlnet_to_diffusers.py from convert_original_stable_diffusion_to_diffusers.py
* Initial implementations of ControlNetModel and StableDiffusionControlNetPipeline; split create_controlnet_diffusers_config() from create_unet_diffusers_config() and add a hint_channels config
* Add input_hint_block, input_zero_conv, and middle_block_out, plus loading support for them, so the model loads without missing-key errors
* Add unet_2d_blocks_controlnet.py (copied from unet_2d_blocks.py) implementing CrossAttnDownBlock2D and DownBlock2D
* Copy from UNet2DConditionModel except __init__; copy forward() as well; support ControlNetModel inference without exceptions (an ultra-primitive test first, then test_controlled_unet_inference passing); freeze weights and biases for training; minimize the model and remove all previous-version files
* from_pretrained and inference tests pass; implement the pipeline (copied from pipeline_stable_diffusion.py except __init__) with an (almost) pixel-match test; make style; make fix-copies (adding the imports the ControlNet blocks need)
* Remove the einops dependency; accept np.ndarray and PIL.Image for the conditioning input, including grayscale (h, w) arrays, List[PIL.Image], and None; set lllyasviel's config file as the default
* Add and update docstrings; add controlnet.mdx (originally control_net.mdx) with a toctree entry, a ControlNet abstract and link, generated image examples, and control image descriptions for each checkpoint; update the copyright year
* Fix PIL RGB->BGR conversion (thanks @Mystfit), with RGB channel ordering by default and a channel-order config on the ControlNet
* Fast and slow tests for the ControlNet/UNet and the pipeline; integration tests for canny, depth, hed, mlsd, normal, openpose, scribble, and seg, with size fixes and CPU offloading; ignore the down/up block length check on ControlNet; add an assertion and a note for vae_scale_factor != 8; default height and width to the conditioning image; image batching logic; fix num_images_per_prompt and prompt-length issues
* Add controlnet_conditioning_scale (a.k.a. control_scales strength); remove the ControlNet constructor arguments re: @patrickvonplaten; generalize the ControlNet embedding; only save the ControlNet model in the conversion script; fix the conversion script
* Docs polish: a depth mask -> a depth map; fix the toc title; adapt the style to the pix2pix documentation; [docs] controlling-generation nits; add ControlNetModel to autodoc; correct images
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: dqueue <dbyqin@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
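A minimal inference sketch of the resulting pipeline, assuming the final API described above (a ControlNetModel passed to StableDiffusionControlNetPipeline and a conditioning `image` argument); the canny checkpoint follows lllyasviel's releases, and the edge-map path is hypothetical:

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a canny-edge ControlNet and attach it to Stable Diffusion.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_edges = Image.open("canny_edges.png")  # precomputed edge map (hypothetical path)
image = pipe(
    "a futuristic city at night",
    image=canny_edges,                  # conditioning image
    controlnet_conditioning_scale=1.0,  # strength of the control signal
).images[0]
```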
-
- 01 Mar, 2023 2 commits
-
Pedro Cuenca authored
Bring flax attention naming in sync with PyTorch.
-
Patrick von Platen authored
-
- 27 Feb, 2023 1 commit
-
Patrick von Platen authored
* [Safetensors] Make sure metadata is saved
* make style
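For reference, safetensors files carry an optional string-to-string metadata dict in their header; a minimal sketch of saving and reading it back (the `format: pt` key/value is an assumption based on common diffusers/transformers practice):

```python
import torch
from safetensors.torch import save_file, safe_open

state_dict = {"weight": torch.zeros(2, 2)}
# Persist metadata alongside the tensors in the safetensors header.
save_file(state_dict, "model.safetensors", metadata={"format": "pt"})

with safe_open("model.safetensors", framework="pt") as f:
    print(f.metadata())  # {'format': 'pt'}
```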
-
- 17 Feb, 2023 2 commits
-
Pedro Cuenca authored
Fix typo in AttnProcessor2_0 symbol.
-
Suraj Patil authored
* Add an sdpa (scaled dot-product attention) processor; initially not used by default, then made the default torch attention processor when available
* Add checks, attention-mask support, a better name, and style/typo fixes; check if the processor is None
* Support torch sdpa in the dreambooth example
* Begin docs (docs/source/en/optimization/torch2.0.mdx) with a benchmark table, including an fp32 benchmark; fix the example; fix the toctree
* Apply suggestions from code review; polish
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
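With PyTorch 2.0 installed the processor is picked up automatically; a minimal sketch of setting it explicitly (the import path shown is the one current diffusers releases use, diffusers.models.attention_processor; at the time of this commit the class lived in diffusers.models.cross_attention):

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Route all attention through torch.nn.functional.scaled_dot_product_attention.
pipe.unet.set_attn_processor(AttnProcessor2_0())
image = pipe("an astronaut riding a horse").images[0]
```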
-
- 16 Feb, 2023 3 commits
-
fxmarty authored
Replace torch.concat with torch.cat
-
Pedro Cuenca authored
* enable_model_cpu_offload PoC: surprisingly more involved than expected, see comments in the PR
* Rename final_offload_hook; invoke the VAE forward hook manually; completely remove the decoder
* Add an apply_forward_hook decorator; rename the method; copy enable_model_cpu_offload and fix copies
* Merge main and fix again; fix doc-builder style
* Add docs and a couple of tests; style
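Model offloading keeps whole sub-models (text encoder, UNet, VAE) on the CPU and moves each one to the GPU only while it runs. A minimal usage sketch (per a later entry in this log, it requires accelerate >= 0.17.0):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Move each sub-model to the GPU only for its own forward pass.
pipe.enable_model_cpu_offload()
image = pipe("a watercolor of a lighthouse").images[0]
```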
-
Patrick von Platen authored
[Variant] Add "variant" as input kwarg so to have better UX when downloading no_ema or fp16 weights (#2305) * [Variant] Add variant loading mechanism * clean * improve further * up * add tests * add some first tests * up * up * use path splittetx * add deprecate * deprecation warnings * improve docs * up * up * up * fix tests * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * Apply suggestions from code review Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> * correct code format * fix warning * finish * Apply suggestions from code review Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Apply suggestions from code review Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Update docs/source/en/using-diffusers/loading.mdx Co-authored-by:
Suraj Patil <surajp815@gmail.com> * Apply suggestions from code review Co-authored-by:
Will Berman <wlbberman@gmail.com> Co-authored-by:
Suraj Patil <surajp815@gmail.com> * correct loading docs * finish --------- Co-authored-by:
Pedro Cuenca <pedro@huggingface.co> Co-authored-by:
Suraj Patil <surajp815@gmail.com> Co-authored-by:
Will Berman <wlbberman@gmail.com>
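A minimal sketch of the new kwarg: select the fp16 weight files at download time instead of pointing at a separate branch or revision (checkpoint is illustrative):

```python
import torch
from diffusers import DiffusionPipeline

# Download only the fp16 variant files (e.g. *.fp16.safetensors).
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    variant="fp16",
    torch_dtype=torch.float16,
)
```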
-
- 14 Feb, 2023 2 commits
-
Will Berman authored
* Add a pipeline variant; add docs for when clip_stats_path is specified
* prepare_latents: # Copied from re: @patrickvonplaten
* NoiseAugmentor -> ImageNormalizer; stable_unclip_prior defaults to None re: @patrickvonplaten
* Add prepare_prior_extra_step_kwargs and the prior denoising scale model input
* {DDIM,DDPM}Scheduler -> KarrasDiffusionSchedulers re: @patrickvonplaten
* Docs; apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
Will Berman authored
* UNet: check input lengths
* Prep the test file for changes; correct all tests; clean up
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
-
- 13 Feb, 2023 1 commit
-
bddppq authored
* Fix running LoRA with xformers
* Support disabling xformers
* Reformat; add test
-
- 07 Feb, 2023 3 commits
-
Patrick von Platen authored
* Run make style; remove leftovers from flake8
* make fix-copies; final fix and more fixes
-
Pedro Cuenca authored
* mps cross-attention hack: don't crash on fp16
* Make the conversion explicit
-
YiYi Xu authored
* Modify UNet2DConditionModel:
  - allow skipping mid_block
  - add a norm_group_size argument so the num_groups for group norm can be set as num_channels // norm_group_size
  - allow the user to set the dimension of the timestep embedding (time_embed_dim)
  - make the kernel_size for conv_in and conv_out configurable
  - add a random Fourier feature layer (GaussianFourierProjection) for time_proj
  - allow adding the time and class embeddings before passing through the projection layer together: time_embedding(t_emb + class_label)
  - add attn1_types and attn2_types: with only_cross_attention=True we get a BasicTransformerBlock with two cross-attentions, otherwise a self-attention followed by a cross-attention; the k-upscaler needs blocks with just one cross-attention, or self-attention -> cross-attention, so these arguments let the user specify the attention type at each of the two positions per block (only_cross_attention is kept for easy configuration and converted to attn1_type/attn2_type when passed down to the down blocks)
  - make the position of the downsample and upsample layers configurable
  - add skip_freq="block": the k-upscaler UNet has only one skip connection per up/down block instead of one per layer as in the Stable Diffusion UNet
  - if the user passes an attention_mask to the UNet, prepare the mask and flag the cross-attention processor to skip its own prepare_attention_mask step
* Add up/down blocks for the k-upscaler
* Modify the CrossAttention class: make the dropout layer in to_out optional; add use_conv_proj to use conv instead of linear for all projection layers (to_q, to_k, to_v, to_out) whenever possible (for cross-attention, to_k and to_v must stay linear because encoder_hidden_states is not 2D); add cross_attention_norm, an optional layernorm on encoder_hidden_states; add attention_dropout, an optional dropout on the attention scores
* Adapt BasicTransformerBlock: add an ada groupnorm layer to condition the attention input on the timestep embedding; allow skipping the FeedForward layer between the attentions; replace only_cross_attention with attn1_type and attn2_type for more flexible configuration
* Update the timestep embedding: add a new gelu act_fn and an optional act_2
* Modify ResnetBlock2D: refactor with the AdaGroupNorm class (the timestep scale-shift normalization); add a mid_channel argument so the first conv can have a different output dimension from the second; optionally apply AdaGroupNorm to the input instead of groupnorm; optionally add a dropout layer after each conv; allow setting the bias in conv_shortcut (needed for the k-upscaler); add gelu
* Add a conversion script for the k-upscaler UNet; add the pipeline
* Review fixes: attention mask, typos (including one in ada group norm and one in the timestep embedding), fp16 support, an error in BasicTransformerBlock; remove upcast_attention=True from the UNet config; replace attn1_types/attn2_types with add_self_attention; revert incorrect changes to the up/down samplers; clean up get_down_block and get_up_block; act_2 -> act_2_fn; remove dropout_after_conv from ResnetBlock2D; simplify KAttentionBlock; KDownsample2d/KUpsample2d -> KDownsample2D/KUpsample2D; deprecate the attention mask; clean up the embeddings, resnet, time proj, and conversion script; remove prepare_extra_step_kwargs and num_images_per_prompt; encode the image if it is not latent; remove force-casting the VAE to fp32; add comments about the preconditioning parameters from the k-diffusion paper; update the slice attention processor for cross-attention; correct the Euler scheduler; add the missing # Copied from for preprocess
* Tests and docs: fast and slow tests (including fp16) for the latent upscaler pipeline; update the checkpoint; docstrings for pipeline_stable_diffusion_latent_upscale and the new UNet config; an API doc page (latent_upscale.mdx); fix the fast and xformers tests; make style and fix-copies (including modeling_text_unet.py); fix an f-string
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
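A minimal sketch of the resulting pipeline: generate Stable Diffusion latents and upscale them 2x before decoding (the upscaler checkpoint name follows Stability AI's release):

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionLatentUpscalePipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a red fox in the snow"
# Keep the generation in latent space and hand it straight to the upscaler.
low_res_latents = pipe(prompt, output_type="latent").images
image = upscaler(prompt, image=low_res_latents, num_inference_steps=20).images[0]
```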
-
- 03 Feb, 2023 1 commit
-
Jorge C. Gomes authored
Related to #2124. The current implementation throws a shape mismatch error, which makes sense: this line is obviously missing compared to XFormersCrossAttnProcessor and LoRACrossAttnProcessor. I don't have formal tests, but I compared `LoRACrossAttnProcessor` and `LoRAXFormersCrossAttnProcessor` ad hoc, and they produce the same results with this fix.
-
- 01 Feb, 2023 3 commits
-
Patrick von Platen authored
* up * finish
-
Muyang Li authored
The dimension does not match when `inner_dim` is not equal to `in_channels`.
-
Asad Memon authored
-
- 27 Jan, 2023 3 commits
-
Patrick von Platen authored
-
Patrick von Platen authored
-
Patrick von Platen authored
* [LoRA] Allow use in inference with the pipeline
* [LoRA] Allow cross-attention kwargs to be passed to the pipeline
* Finish
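A minimal sketch of scaling LoRA weights at inference, assuming LoRA attention processors have already been loaded onto the UNet (see the LoRA loaders entry further down; the weights path is hypothetical):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.unet.load_attn_procs("path/to/lora")  # hypothetical local LoRA weights

# `scale` blends the LoRA contribution into the frozen attention weights.
image = pipe(
    "a portrait in the fine-tuned style",
    cross_attention_kwargs={"scale": 0.5},
).images[0]
```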
-
- 26 Jan, 2023 4 commits
-
Patrick von Platen authored
-
Will Berman authored
* Fuse the attention mask
* Use beta=0 when there is no attention mask re: @Birch-san
* Lint
-
Suraj Patil authored
* Make the scaling factor a config arg of the VAE
* Fix LDM and the upscaler; make flake happy; quality
* Resolve conflicts and address comments; update the examples and their min version; docs
* Fix a type and a typo; remove a duplicate line
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
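The practical effect is that pipelines and training scripts can read the factor from the model config instead of hard-coding 0.18215. A minimal sketch (the random input stands in for a real image batch):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.randn(1, 3, 512, 512)
# Scale latents by the factor stored in the config (0.18215 for SD v1).
latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
decoded = vae.decode(latents / vae.config.scaling_factor).sample
```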
-
Pedro Cuenca authored
* Allow `UNet2DModel` to use arbitrary class embeddings. We can currently use class conditioning in `UNet2DConditionModel`, but not in `UNet2DModel`. However, `UNet2DConditionModel` requires text conditioning too, which is unrelated to other types of conditioning. This commit makes it possible for `UNet2DModel` to be conditioned on entities other than timesteps, which is useful for training and research purposes: we can currently train models for unconditional or text-to-image generation, but it's not straightforward to train a class-conditioned model when text conditioning is not required. We could potentially use `UNet2DConditionModel` for class conditioning without text embeddings by using down/up blocks without cross-conditioning; however, the mid block currently requires cross attention, and we are required to provide `encoder_hidden_states` to `forward`.
* Style
* Align class conditioning; add a docstring for `num_class_embeds`
* Copy the docstring to the versatile_diffusion UNetFlatConditionModel
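A minimal sketch of class-conditioned use of `UNet2DModel`, assuming the `num_class_embeds` argument and the `class_labels` forward kwarg described above (sizes are illustrative):

```python
import torch
from diffusers import UNet2DModel

# A small class-conditional UNet: 10 classes, no text conditioning needed.
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    num_class_embeds=10,  # learn an embedding table with 10 entries
)

sample = torch.randn(4, 3, 32, 32)
timesteps = torch.randint(0, 1000, (4,))
class_labels = torch.tensor([0, 1, 2, 3])
noise_pred = unet(sample, timesteps, class_labels=class_labels).sample
```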
-
- 24 Jan, 2023 1 commit
-
Takuma Mori authored
* Allow passing an op to xFormers attention (original code by @patil-suraj, huggingface/diffusers@ae0cc0b71f28c0f2c5c27026b18f1bea98b505f1)
* Document the attention_op argument and add a fully functional usage example to the docstring
* Style corrections with `make style`
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
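A minimal sketch of passing a specific xFormers op, modeled on the docstring example this commit adds (requires xformers to be installed):

```python
import torch
import xformers.ops
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Force the Flash-Attention kernel instead of letting xFormers dispatch.
pipe.enable_xformers_memory_efficient_attention(
    attention_op=xformers.ops.MemoryEfficientAttentionFlashAttentionOp
)
```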
-
- 19 Jan, 2023 1 commit
-
Patrick von Platen authored
correct safetensors
-
- 18 Jan, 2023 1 commit
-
Patrick von Platen authored
* [LoRA] First upload: first LoRA version with loaders, training, and inference
* Fix more; finish loaders and inference; fix the docs
* Change the copyright year, then revert the change; change lines
* Add cloneofsimo as co-author
* Apply suggestions from code review
Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
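A minimal sketch of the loader added here, assuming LoRA attention weights trained with the DreamBooth LoRA example (the repo id is illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach LoRA attention-processor weights to the UNet.
pipe.unet.load_attn_procs("sayakpaul/sd-model-finetuned-lora-t4")  # illustrative repo id
image = pipe("a pokemon with blue eyes", cross_attention_kwargs={"scale": 0.8}).images[0]
```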
-
- 17 Jan, 2023 1 commit
-
Kashif Rasul authored
* Add the DiT model, conversion script, and initial pipeline using DDIMScheduler
* Use the Timesteps class for the projection with flip_sin_to_cos; better variable names; fix the C-shape calculation and numpy types; set the proper device for drop_ids; move samples to CPU; raise ValueError where appropriate
* Reuse existing building blocks: use the FeedForward class instead of another MLP, merge DiTBlock into BasicTransformerBlock (adding the missing final_dropout and args), use Transformer2DModel with self.is_input_patches, get rid of magic numbers and block_kwargs, and remove dit.py
* Fix fp16 inference with randn_tensor and correct timestep dtypes/devices; fix the code path when guidance is off; fix norm2 usage in vq-diffusion; revert the AdaLayerNorm and norm2 API changes; use norm_type as a string with norm_type="layer" by default
* Rename dit to transformer; add docs, copyright headers, example usage, a toc entry, author names, and the ImageNet labels link; add fast and slow tests, keep all components in eval mode, and do not skip the common tests
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
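A minimal sketch of the class-conditional DiT pipeline, using the label helper the pipeline exposes (checkpoint name per the released DiT-XL/2 weights on the Hub):

```python
import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained(
    "facebook/DiT-XL-2-256", torch_dtype=torch.float16
).to("cuda")

# Map human-readable ImageNet class names to label ids.
class_ids = pipe.get_label_ids(["golden retriever"])
image = pipe(class_labels=class_ids, num_inference_steps=25).images[0]
```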
-
- 16 Jan, 2023 2 commits
-
Will Berman authored
re: https://github.com/huggingface/diffusers/issues/1857. We relax some of the checks to deal with unCLIP reproducibility issues, mainly by checking the average pixel difference (measured within 0-255) instead of the max pixel difference (measured within 0-1).
- Add the mixin to UnCLIPPipelineFastTests and UnCLIPImageVariationPipelineFastTests
- Move the UnCLIPPipeline flags in the mixin to the base class
- Small MPS fixes for F.pad and F.interpolate
- Make the test unCLIP model's dimensions smaller to run tests faster
-
Patrick von Platen authored
-
- 12 Jan, 2023 1 commit
-
camenduru authored
* Add from_flax support (loading Flax checkpoints into PyTorch models and pipelines)
* make style with pip install -e ".[dev]"; now code quality is happy 😋
* allow_patterns += FLAX_WEIGHTS_NAME
* Remove the is_flax_available() check; add a test; finish
* Apply review suggestions to pipeline_utils.py, modeling_utils.py, and modeling_pytorch_flax_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
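A minimal sketch of the new flag: load Flax weights into a PyTorch pipeline, converting on the fly (the repo id is illustrative; any repo that ships Flax weights works):

```python
from diffusers import DiffusionPipeline

# Convert Flax (.msgpack) weights to PyTorch while loading.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", from_flax=True  # illustrative repo id
)
```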
-
- 04 Jan, 2023 1 commit
-
Patrick von Platen authored
* [Repro] Correct reproducibility: replace all torch.randn calls; need a better test image
* Allow conversion from checkpoints with no state dict
* Check tensors; better naming; apply suggestions from code review; finish and fix more
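The reproducibility story comes down to creating noise on the CPU with a seeded generator so results match across devices. A minimal sketch using the randn_tensor helper (the diffusers.utils import path is the one used at the time; later releases moved it to diffusers.utils.torch_utils):

```python
import torch
from diffusers.utils import randn_tensor

# CPU generators give the same noise regardless of the target device.
generator = torch.Generator("cpu").manual_seed(0)
noise = randn_tensor((1, 4, 64, 64), generator=generator, device=torch.device("cuda"))
```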
-