Unverified commit 98730c5d authored by Tolga Cangöz, committed by GitHub

Errata (#8322)

* Fix typos

* Trim trailing whitespaces

* Remove a trailing whitespace

* chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0

* Revert "chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0"

This reverts commit fd742b30b4258106008a6af4d0dd4664904f8595.

* pokemon -> naruto

* `DPMSolverMultistep` -> `DPMSolverMultistepScheduler`

* Improve Markdown stylization

* Improve style

* Improve style

* Refactor pipeline variable names for consistency

* up style
parent 7ebd3594
@@ -75,7 +75,7 @@ with torch.no_grad():
     prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt)
 ```
-Since text embeddings have been computed, remove the `text_encoder` and `pipe` from the memory, and free up som GPU VRAM:
+Since text embeddings have been computed, remove the `text_encoder` and `pipe` from the memory, and free up some GPU VRAM:
 ```python
 import gc
@@ -146,4 +146,3 @@ While loading the `text_encoder`, you set `load_in_8bit` to `True`. You could al
 [[autodoc]] PixArtAlphaPipeline
   - all
   - __call__
\ No newline at end of file
@@ -59,7 +59,6 @@ text_encoder = T5EncoderModel.from_pretrained(
     subfolder="text_encoder",
     load_in_8bit=True,
     device_map="auto",
 )
 pipe = PixArtSigmaPipeline.from_pretrained(
     "PixArt-alpha/PixArt-Sigma-XL-2-1024-MS",
@@ -77,7 +76,7 @@ with torch.no_grad():
     prompt_embeds, prompt_attention_mask, negative_embeds, negative_prompt_attention_mask = pipe.encode_prompt(prompt)
 ```
-Since text embeddings have been computed, remove the `text_encoder` and `pipe` from the memory, and free up som GPU VRAM:
+Since text embeddings have been computed, remove the `text_encoder` and `pipe` from the memory, and free up some GPU VRAM:
 ```python
 import gc
@@ -148,4 +147,3 @@ While loading the `text_encoder`, you set `load_in_8bit` to `True`. You could al
 [[autodoc]] PixArtSigmaPipeline
   - all
   - __call__
\ No newline at end of file
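The hunks above fix a typo in the passage about freeing GPU VRAM after the prompt embeddings are computed. As a hedged, self-contained sketch of that free-the-memory pattern (the `flush` helper is illustrative, not a diffusers API; on a real GPU setup you would also call `torch.cuda.empty_cache()`):

```python
import gc


def flush(namespace, *names):
    # Illustrative helper: drop references such as `text_encoder` and `pipe`
    # from a namespace, then force a garbage-collection pass so Python can
    # reclaim the memory they held.
    for name in names:
        namespace.pop(name, None)
    gc.collect()
    # With PyTorch and a GPU you would follow up with:
    # torch.cuda.empty_cache()


text_encoder = object()  # stand-ins for the real 8-bit text encoder
pipe = object()          # and the pipeline
flush(globals(), "text_encoder", "pipe")
```

Dropping the references before loading the full pipeline is what lets the 8-bit text encoder and the denoiser fit on the same GPU sequentially rather than simultaneously.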
@@ -177,7 +177,7 @@ inpaint = StableDiffusionInpaintPipeline(**text2img.components)
 The Stable Diffusion pipelines are automatically supported in [Gradio](https://github.com/gradio-app/gradio/), a library that makes creating beautiful and user-friendly machine learning apps on the web a breeze. First, make sure you have Gradio installed:
-```
+```sh
 pip install -U gradio
 ```
...
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.
 # EDMDPMSolverMultistepScheduler
-`EDMDPMSolverMultistepScheduler` is a [Karras formulation](https://huggingface.co/papers/2206.00364) of `DPMSolverMultistep`, a multistep scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.
+`EDMDPMSolverMultistepScheduler` is a [Karras formulation](https://huggingface.co/papers/2206.00364) of `DPMSolverMultistepScheduler`, a multistep scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.
 DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality
 samples, and it can generate quite good samples even in 10 steps.
...
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.
 # DPMSolverMultistepScheduler
-`DPMSolverMultistep` is a multistep scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.
+`DPMSolverMultistepScheduler` is a multistep scheduler from [DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps](https://huggingface.co/papers/2206.00927) and [DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models](https://huggingface.co/papers/2211.01095) by Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.
 DPMSolver (and the improved version DPMSolver++) is a fast dedicated high-order solver for diffusion ODEs with convergence order guarantee. Empirically, DPMSolver sampling with only 20 steps can generate high-quality
 samples, and it can generate quite good samples even in 10 steps.
...
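The scheduler docs above describe DPM-Solver as a *multistep* method: each step reuses model evaluations from earlier steps to gain accuracy. As a hedged, toy illustration of the general multistep idea only (a two-step Adams-Bashforth ODE solver, not the actual DPM-Solver algorithm):

```python
import math


def adams_bashforth2(f, y0, t0, t1, n):
    """Two-step linear multistep ODE solver: each update reuses the
    previous derivative evaluation, analogous to how multistep diffusion
    schedulers reuse earlier denoiser outputs."""
    h = (t1 - t0) / n
    t, y = t0, y0
    f_prev = f(t, y)
    # Bootstrap the first step with plain Euler, since no history exists yet.
    y = y + h * f_prev
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        # AB2 update: a weighted combination of current and previous slopes.
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
        t += h
    return y


# Solve dy/dt = -y from y(0) = 1; the exact solution at t = 1 is exp(-1).
approx = adams_bashforth2(lambda t, y: -y, 1.0, 0.0, 1.0, 50)
print(abs(approx - math.exp(-1)))  # small error despite modest step count
```

The second-order accuracy per step, obtained essentially for free from stored history, is the same intuition behind DPM-Solver needing only 10-20 steps.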
@@ -415,7 +415,7 @@ image = diffusers.utils.load_image(
 pipe = diffusers.MarigoldDepthPipeline.from_pretrained(
     "prs-eth/marigold-depth-lcm-v1-0", torch_dtype=torch.float16, variant="fp16"
-).to("cuda")
+).to(device)
 depth_image = pipe(image, generator=generator).prediction
 depth_image = pipe.image_processor.visualize_depth(depth_image, color_map="binary")
@@ -423,10 +423,10 @@ depth_image[0].save("motorcycle_controlnet_depth.png")
 controlnet = diffusers.ControlNetModel.from_pretrained(
     "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
-).to("cuda")
+).to(device)
 pipe = diffusers.StableDiffusionXLControlNetPipeline.from_pretrained(
     "SG161222/RealVisXL_V4.0", torch_dtype=torch.float16, variant="fp16", controlnet=controlnet
-).to("cuda")
+).to(device)
 pipe.scheduler = diffusers.DPMSolverMultistepScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
 controlnet_out = pipe(
...
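The hunk above replaces hard-coded `.to("cuda")` calls with a `device` variable defined earlier in that guide. A hedged sketch of how such a variable is typically derived, falling back to CPU when PyTorch or a GPU is unavailable:

```python
# Pick a device string for pipeline placement; the fallback branch is
# illustrative for environments without PyTorch installed.
try:
    import torch
    device = "cuda" if torch.cuda.is_available() else "cpu"
except ImportError:
    device = "cpu"

print(device)
```

Centralizing the device choice in one variable is what makes the rest of the snippet portable between GPU and CPU machines.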
@@ -34,7 +34,7 @@ Stable Diffusion XL, by Dustin Podell, Zion English, Kyle Lacey, Andreas Blattman
 Before using SDXL, install `transformers`, `accelerate`, `safetensors`, and `invisible_watermark`.
 You can install the libraries as follows:
-```
+```sh
 pip install transformers
 pip install accelerate
 pip install safetensors
@@ -46,7 +46,7 @@ pip install invisible-watermark>=0.2.0
 When generating images with Stable Diffusion XL, it is recommended to add an invisible watermark, which can help downstream applications identify whether content is machine-generated. To do so, install the [invisible_watermark library](https://pypi.org/project/invisible-watermark/):
-```
+```sh
 pip install invisible-watermark>=0.2.0
 ```
@@ -352,7 +352,7 @@ If you run into an out-of-memory error, [`StableDiffusionXLPipeline.enable_model_cpu_
 **Note** To run Stable Diffusion XL with a `torch` version below 2.0, use xformers attention:
-```
+```sh
 pip install xformers
 ```
...
@@ -93,13 +93,13 @@ cd diffusers
 **For PyTorch**
-```
+```sh
 pip install -e ".[torch]"
 ```
 **For Flax**
-```
+```sh
 pip install -e ".[flax]"
 ```
...
@@ -19,7 +19,7 @@ specific language governing permissions and limitations under the License.
 Install 🤗 Optimum with ONNX Runtime support with the following command:
-```
+```sh
 pip install optimum["onnxruntime"]
 ```
...
@@ -19,7 +19,7 @@ specific language governing permissions and limitations under the License.
 Install 🤗 Optimum with the following command:
-```
+```sh
 pip install optimum["openvino"]
 ```
...
@@ -59,7 +59,7 @@ image
 First, you need to install the `compel` library:
-```
+```sh
 pip install compel
 ```
...
@@ -95,13 +95,13 @@ cd diffusers
 **PyTorch**
-```
+```sh
 pip install -e ".[torch]"
 ```
 **Flax**
-```
+```sh
 pip install -e ".[flax]"
 ```
...
@@ -25,7 +25,7 @@ from diffusers.utils.torch_utils import randn_tensor
 EXAMPLE_DOC_STRING = """
     Examples:
-        ```
+        ```py
         from io import BytesIO
         import requests
...