"...git@developer.sourcefind.cn:renzhc/diffusers_dcu.git" did not exist on "0d7aac3e8df669faf14c9dcce00d324f51acdce8"
Unverified Commit 1328aeb2 authored by Garry Dolley, committed by GitHub

[Docs] Clarify that these are two separate examples (#5734)

* [Docs] Running the pipeline twice does not appear to be the intention of these examples

One example uses `cross_attention_kwargs` and the other (the next line) omits it

* [Docs] Clarify that these are two separate examples

One uses `scale` and the other does not
parent 53a8439f
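For context, here is a minimal self-contained sketch of the pattern the two examples illustrate. The base model ID, the LoRA checkpoint path, and the variable names are placeholders (not from the commit itself); it assumes a LoRA checkpoint produced by the text-to-image LoRA training script.

```py
import torch
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion checkpoint (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.to("cuda")

# Load the LoRA attention weights on top of the base UNet weights
# ("path/to/lora" stands in for your finetuned checkpoint directory).
pipe.unet.load_attn_procs("path/to/lora")

# Example 1: blend half LoRA and half base weights via the `scale` kwarg.
blended = pipe(
    "A pokemon with blue eyes.",
    num_inference_steps=25,
    guidance_scale=7.5,
    cross_attention_kwargs={"scale": 0.5},
).images[0]

# Example 2: omit the kwarg to use the fully finetuned LoRA weights.
full_lora = pipe(
    "A pokemon with blue eyes.",
    num_inference_steps=25,
    guidance_scale=7.5,
).images[0]
```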
@@ -113,14 +113,15 @@ Load the LoRA weights from your finetuned model *on top of the base model weight
 ```py
 >>> pipe.unet.load_attn_procs(lora_model_path)
 >>> pipe.to("cuda")
 # use half the weights from the LoRA finetuned model and half the weights from the base model
 >>> image = pipe(
 ... "A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5, cross_attention_kwargs={"scale": 0.5}
 ... ).images[0]
-# use the weights from the fully finetuned LoRA model
->>> image = pipe("A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5).images[0]
+
+# OR, use the weights from the fully finetuned LoRA model
+# >>> image = pipe("A pokemon with blue eyes.", num_inference_steps=25, guidance_scale=7.5).images[0]
 >>> image.save("blue_pokemon.png")
 ```
@@ -225,17 +226,18 @@ Load the LoRA weights from your finetuned DreamBooth model *on top of the base m
 ```py
 >>> pipe.unet.load_attn_procs(lora_model_path)
 >>> pipe.to("cuda")
 # use half the weights from the LoRA finetuned model and half the weights from the base model
 >>> image = pipe(
 ... "A picture of a sks dog in a bucket.",
 ... num_inference_steps=25,
 ... guidance_scale=7.5,
 ... cross_attention_kwargs={"scale": 0.5},
 ... ).images[0]
-# use the weights from the fully finetuned LoRA model
->>> image = pipe("A picture of a sks dog in a bucket.", num_inference_steps=25, guidance_scale=7.5).images[0]
+
+# OR, use the weights from the fully finetuned LoRA model
+# >>> image = pipe("A picture of a sks dog in a bucket.", num_inference_steps=25, guidance_scale=7.5).images[0]
 >>> image.save("bucket-dog.png")
 ```
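Since the commit frames these as two separate examples, it may also help to see how `scale` interpolates between them: with the LoRA attention processors loaded as above, `scale=0.0` should reproduce the base model and `scale=1.0` the fully finetuned LoRA behaviour. A small, hypothetical sweep (output filenames are arbitrary):

```py
# Assumes `pipe` already has the DreamBooth LoRA weights loaded via load_attn_procs.
for scale in (0.0, 0.25, 0.5, 0.75, 1.0):
    image = pipe(
        "A picture of a sks dog in a bucket.",
        num_inference_steps=25,
        guidance_scale=7.5,
        cross_attention_kwargs={"scale": scale},
    ).images[0]
    image.save(f"bucket-dog-scale-{scale}.png")
```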