Unverified Commit 6b9906f6 authored by Satpal Singh Rathore, committed by GitHub

[Docs] Pipelines for inference (#417)

* Update conditional_image_generation.mdx

* Update unconditional_image_generation.mdx
parent a353c46e
conditional_image_generation.mdx

# Conditional Image Generation

The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.

Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download.
You can use the [`DiffusionPipeline`] for any [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
In this guide, you'll use the [`DiffusionPipeline`] for text-to-image generation with [Latent Diffusion](https://huggingface.co/CompVis/ldm-text2im-large-256):
```python
>>> from diffusers import DiffusionPipeline
>>> generator = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
```
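If you already know which pipeline class a checkpoint uses, you can also load it directly. This is a hedged sketch: it assumes the checkpoint resolves to `LDMTextToImagePipeline`, the class Latent Diffusion text-to-image checkpoints ordinarily map to in Diffusers.

```python
>>> from diffusers import LDMTextToImagePipeline

>>> # Loads the same weights as above, but through the concrete pipeline class
>>> generator = LDMTextToImagePipeline.from_pretrained("CompVis/ldm-text2im-large-256")
```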
The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
Because the model consists of roughly 1.4 billion parameters, we strongly recommend running it on a GPU.
You can move the generator object to a GPU, just as you would in PyTorch:
```python
>>> generator.to("cuda")
```
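If you are not sure a CUDA device is present, a small guard keeps the snippet portable. This is a plain-PyTorch sketch, not part of the original docs:

```python
>>> import torch

>>> # Fall back to CPU when no GPU is available
>>> device = "cuda" if torch.cuda.is_available() else "cpu"
>>> generator.to(device)
```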
Now you can use the `generator` on your text prompt:
```python
>>> image = generator("An image of a squirrel in Picasso style").images[0]
```
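Sampling is stochastic, so each call returns a different image. One way to get repeatable results is to seed PyTorch's global RNG before calling the pipeline; `num_inference_steps`, assumed here, is the usual Diffusers knob for trading speed against quality:

```python
>>> import torch

>>> torch.manual_seed(0)  # fix the RNG so repeated runs produce the same sample
>>> image = generator("An image of a squirrel in Picasso style", num_inference_steps=50).images[0]
```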
By default, the output is wrapped in a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).

You can save the image by calling:
```python
>>> image.save("image_of_squirrel_painting.png")
```
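Because the loop below only reuses calls already shown above, you can sample several variations and save each one; the filename pattern is made up for illustration:

```python
>>> for i in range(3):
...     image = generator("An image of a squirrel in Picasso style").images[0]
...     image.save(f"squirrel_{i}.png")
```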
unconditional_image_generation.mdx

# Unconditional Image Generation

The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.

Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download.
You can use the [`DiffusionPipeline`] for any [Diffusers' checkpoint](https://huggingface.co/models?library=diffusers&sort=downloads).
In this guide, you'll use the [`DiffusionPipeline`] for unconditional image generation with [DDPM](https://arxiv.org/abs/2006.11239):
```python
>>> from diffusers import DiffusionPipeline
>>> generator = DiffusionPipeline.from_pretrained("google/ddpm-celebahq-256")
```
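As in the conditional example, the concrete pipeline class can be loaded directly; this assumes the checkpoint resolves to `DDPMPipeline`, which is how DDPM checkpoints are ordinarily served in Diffusers:

```python
>>> from diffusers import DDPMPipeline

>>> # Same weights, loaded through the task-specific class
>>> generator = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
```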
The [`DiffusionPipeline`] downloads and caches all modeling and scheduling components.
For faster generation, we strongly recommend running the pipeline on a GPU.
You can move the generator object to a GPU, just as you would in PyTorch:
```python
>>> generator.to("cuda")
```
Now you can use the `generator` to generate an image:
```python
>>> image = generator().images[0]
```
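Each call draws a fresh random sample; to make a run reproducible, you can seed PyTorch's global RNG first (a minimal sketch, not part of the original docs):

```python
>>> import torch

>>> torch.manual_seed(0)  # repeated runs now yield the same face
>>> image = generator().images[0]
```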
By default, the output is wrapped in a [PIL Image object](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class).

You can save the image by calling:
```python
>>> image.save("generated_image.png")
```
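To draw several samples at once, DDPM-style pipelines typically accept a `batch_size` argument; treating that as an assumption, a sketch:

```python
>>> # Assumes the pipeline's __call__ accepts batch_size, as DDPM-style pipelines do
>>> images = generator(batch_size=4).images
>>> for i, image in enumerate(images):
...     image.save(f"generated_face_{i}.png")
```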