Unverified Commit ac373846 authored by Patrick von Platen, committed by GitHub

[Docs] Improve docs (#1870)

* [Docs] Improve docs

* up
parent a6e2c1fe
@@ -30,13 +30,17 @@ Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrain
## DiffusionPipeline
[[autodoc]] DiffusionPipeline
-    - from_pretrained
-    - save_pretrained
-    - to
+    - all
+    - __call__
+    - device
+    - components
+    - to
## ImagePipelineOutput
By default diffusion pipelines return an object of class
[[autodoc]] pipelines.ImagePipelineOutput
## AudioPipelineOutput
By default diffusion pipelines return an object of class
[[autodoc]] pipelines.AudioPipelineOutput
@@ -69,15 +69,15 @@ If you want to use all possible use cases in a single `DiffusionPipeline` we rec
## AltDiffusionPipelineOutput
[[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput
- all
- __call__
## AltDiffusionPipeline
[[autodoc]] AltDiffusionPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
## AltDiffusionImg2ImgPipeline
[[autodoc]] AltDiffusionImg2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
@@ -91,12 +91,8 @@ display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
## AudioDiffusionPipeline
[[autodoc]] AudioDiffusionPipeline
-    - __call__
-    - encode
-    - slerp
+    - all
+    - __call__
## Mel
[[autodoc]] Mel
- audio_slice_to_image
- image_to_audio
@@ -96,4 +96,5 @@ image.save("black_to_blue.png")
## CycleDiffusionPipeline
[[autodoc]] CycleDiffusionPipeline
+    - all
- __call__
@@ -30,4 +30,5 @@ The original codebase of this implementation can be found [here](https://github.
## DanceDiffusionPipeline
[[autodoc]] DanceDiffusionPipeline
-    - __call__
+    - all
+    - __call__
@@ -32,4 +32,5 @@ For questions, feel free to contact the author on [tsong.me](https://tsong.me/).
## DDIMPipeline
[[autodoc]] DDIMPipeline
-    - __call__
+    - all
+    - __call__
@@ -33,4 +33,5 @@ The original codebase of this paper can be found [here](https://github.com/hojon
# DDPMPipeline
[[autodoc]] DDPMPipeline
-    - __call__
+    - all
+    - __call__
@@ -40,8 +40,10 @@ The original codebase can be found [here](https://github.com/CompVis/latent-diff
## LDMTextToImagePipeline
[[autodoc]] LDMTextToImagePipeline
-    - __call__
+    - all
+    - __call__
## LDMSuperResolutionPipeline
[[autodoc]] LDMSuperResolutionPipeline
-    - __call__
+    - all
+    - __call__
@@ -38,4 +38,5 @@ The original codebase can be found [here](https://github.com/CompVis/latent-diff
## LDMPipeline
[[autodoc]] LDMPipeline
-    - __call__
+    - all
+    - __call__
@@ -69,5 +69,6 @@ image
```
## PaintByExamplePipeline
-[[autodoc]] pipelines.paint_by_example.pipeline_paint_by_example.PaintByExamplePipeline
-    - __call__
+[[autodoc]] PaintByExamplePipeline
+    - all
+    - __call__
@@ -30,6 +30,6 @@ The original codebase can be found [here](https://github.com/luping-liu/PNDM).
## PNDMPipeline
-[[autodoc]] pipelines.pndm.pipeline_pndm.PNDMPipeline
-    - __call__
+[[autodoc]] PNDMPipeline
+    - all
+    - __call__
@@ -72,6 +72,6 @@ inpainted_image = output.images[0]
```
## RePaintPipeline
-[[autodoc]] pipelines.repaint.pipeline_repaint.RePaintPipeline
-    - __call__
+[[autodoc]] RePaintPipeline
+    - all
+    - __call__
@@ -32,5 +32,5 @@ This pipeline implements the Variance Expanding (VE) variant of the method.
## ScoreSdeVePipeline
[[autodoc]] ScoreSdeVePipeline
-    - __call__
+    - all
+    - __call__
@@ -73,16 +73,18 @@ If you want to use all possible use cases in a single `DiffusionPipeline` you ca
## StableDiffusionPipeline
[[autodoc]] StableDiffusionPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_vae_slicing
- disable_vae_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
## StableDiffusionImg2ImgPipeline
[[autodoc]] StableDiffusionImg2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
@@ -91,6 +93,7 @@ If you want to use all possible use cases in a single `DiffusionPipeline` you ca
## StableDiffusionInpaintPipeline
[[autodoc]] StableDiffusionInpaintPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
@@ -99,6 +102,7 @@ If you want to use all possible use cases in a single `DiffusionPipeline` you ca
## StableDiffusionDepth2ImgPipeline
[[autodoc]] StableDiffusionDepth2ImgPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
@@ -107,15 +111,16 @@ If you want to use all possible use cases in a single `DiffusionPipeline` you ca
## StableDiffusionImageVariationPipeline
[[autodoc]] StableDiffusionImageVariationPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
- enable_xformers_memory_efficient_attention
- disable_xformers_memory_efficient_attention
## StableDiffusionUpscalePipeline
[[autodoc]] StableDiffusionUpscalePipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
@@ -81,10 +81,10 @@ To use a different scheduler, you can either change it via the [`ConfigMixin.fro
## StableDiffusionSafePipelineOutput
[[autodoc]] pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput
- all
- __call__
## StableDiffusionPipelineSafe
[[autodoc]] StableDiffusionPipelineSafe
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
@@ -32,4 +32,5 @@ This pipeline implements the Stochastic sampling tailored to the Variance-Expand
## KarrasVePipeline
[[autodoc]] KarrasVePipeline
-    - __call__
+    - all
+    - __call__
@@ -24,10 +24,14 @@ The unCLIP model in diffusers comes from kakaobrain's karlo and the original cod
| Pipeline | Tasks | Colab
|---|---|:---:|
| [pipeline_unclip.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip.py) | *Text-to-Image Generation* | - |
| [pipeline_unclip_image_variation.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py) | *Image-Guided Image Generation* | - |
## UnCLIPPipeline
-[[autodoc]] pipelines.unclip.pipeline_unclip.UnCLIPPipeline
-    - __call__
-[[autodoc]] pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline
-    - __call__
\ No newline at end of file
+[[autodoc]] UnCLIPPipeline
+    - all
+    - __call__
+[[autodoc]] UnCLIPImageVariationPipeline
+    - all
+    - __call__
@@ -56,18 +56,15 @@ To use a different scheduler, you can either change it via the [`ConfigMixin.fro
## VersatileDiffusionTextToImagePipeline
[[autodoc]] VersatileDiffusionTextToImagePipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
## VersatileDiffusionImageVariationPipeline
[[autodoc]] VersatileDiffusionImageVariationPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
## VersatileDiffusionDualGuidedPipeline
[[autodoc]] VersatileDiffusionDualGuidedPipeline
- all
- __call__
- enable_attention_slicing
- disable_attention_slicing
@@ -30,5 +30,6 @@ The original codebase can be found [here](https://github.com/microsoft/VQ-Diffus
## VQDiffusionPipeline
-[[autodoc]] pipelines.vq_diffusion.pipeline_vq_diffusion.VQDiffusionPipeline
-    - __call__
+[[autodoc]] VQDiffusionPipeline
+    - all
+    - __call__