renzhc / diffusers_dcu — Commit 43f1090a (unverified)

Authored Aug 21, 2024 by Steven Liu; committed via GitHub on Aug 22, 2024. Parent: c2916175

[docs] Network alpha docstring (#9238)

fix docstring

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
1 changed file with 18 additions and 6 deletions (+18 −6)

src/diffusers/loaders/lora_pipeline.py
@@ -280,7 +280,9 @@ class StableDiffusionLoraLoaderMixin(LoraBaseMixin):
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -753,7 +755,9 @@ class StableDiffusionXLLoraLoaderMixin(LoraBaseMixin):
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -1249,7 +1253,9 @@ class SD3LoraLoaderMixin(LoraBaseMixin):
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -1735,7 +1741,9 @@ class FluxLoraLoaderMixin(LoraBaseMixin):
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
@@ -1968,7 +1976,9 @@ class AmusedLoraLoaderMixin(StableDiffusionLoraLoaderMixin):
                 into the unet or prefixed with an additional `unet` which can be used to distinguish between text
                 encoder lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             unet (`UNet2DConditionModel`):
                 The UNet model to load the LoRA layers into.
             adapter_name (`str`, *optional*):
@@ -2061,7 +2071,9 @@ class AmusedLoraLoaderMixin(StableDiffusionLoraLoaderMixin):
                 A standard state dict containing the lora layer parameters. The key should be prefixed with an
                 additional `text_encoder` to distinguish between unet lora layers.
             network_alphas (`Dict[str, float]`):
-                See `LoRALinearLayer` for more details.
+                The value of the network alpha used for stable learning and preventing underflow. This value has the
+                same meaning as the `--network_alpha` option in the kohya-ss trainer script. Refer to [this
+                link](https://github.com/darkstorm2150/sd-scripts/blob/main/docs/train_network_README-en.md#execute-learning).
             text_encoder (`CLIPTextModel`):
                 The text encoder model to load the LoRA layers into.
             prefix (`str`):
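The new docstring text describes `network_alpha` as a value "used for stable learning and preventing underflow". As a minimal sketch of what that means in practice (not code from this commit): in the usual LoRA convention, the low-rank update `B @ A` is multiplied by `alpha / rank` before being added to the base weight, so fixing `network_alpha` keeps the update magnitude comparable across different ranks. The function name below is hypothetical, chosen only for illustration.

```python
def lora_scaling(network_alpha: float, rank: int) -> float:
    """Scaling factor applied to the low-rank LoRA update B @ A.

    The effective weight is W' = W + (network_alpha / rank) * (B @ A).
    Holding network_alpha fixed while rank varies keeps the update
    magnitude stable, which helps avoid fp16 underflow during training.
    """
    return network_alpha / rank


# With alpha equal to rank, the update is applied at full strength:
assert lora_scaling(network_alpha=8.0, rank=8) == 1.0
# Halving alpha (or doubling rank) halves the effective update:
assert lora_scaling(network_alpha=4.0, rank=8) == 0.5
```

This is the same `alpha / rank` convention the kohya-ss `--network_alpha` option refers to; the per-module values in the `network_alphas` dict play the role of `network_alpha` here.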