Unverified commit 0ccad2ad authored by Umar, committed by GitHub

Update stable_diffusion.mdx (#3310)

fixed import statement
parent efc48da2
@@ -153,7 +153,7 @@ def get_inputs(batch_size=1):
 You'll also need a function that'll display each batch of images:
 
 ```python
-from PIL import image
+from PIL import Image
 
 def image_grid(imgs, rows=2, cols=2):
@@ -268,4 +268,4 @@ In this tutorial, you learned how to optimize a [`DiffusionPipeline`] for comput
 - Enable [xFormers](./optimization/xformers) memory efficient attention mechanism for faster speed and reduced memory consumption.
 - Learn how in [PyTorch 2.0](./optimization/torch2.0), [`torch.compile`](https://pytorch.org/docs/stable/generated/torch.compile.html) can yield 2-9% faster inference speed.
 - Many optimization techniques for inference are also included in this memory and speed [guide](./optimization/fp16), such as memory offloading.
\ No newline at end of file
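For context on why the import fix matters: PIL exposes an `Image` class (capitalized), not `image`, and the `image_grid` helper defined just below the corrected line relies on it. A minimal sketch of such a helper (the exact body in stable_diffusion.mdx may differ):

```python
from PIL import Image  # "Image" must be capitalized; "from PIL import image" raises ImportError


def image_grid(imgs, rows=2, cols=2):
    """Paste a list of equally sized PIL images into a single rows x cols grid."""
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        # Column index wraps every `cols` images; row index advances after each full row
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid
```

With the lowercase `image`, the notebook fails at import time before any pipeline runs, which is what this one-character commit fixes.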