Commit 908e5e9c (unverified) in renzhc/diffusers_dcu
Authored by Patrick von Platen on Jun 15, 2023; committed by GitHub on Jun 15, 2023
Parent: 27150793

Fix some bad comment in training scripts (#3798)

* relax tolerance slightly
* correct incorrect naming
Showing 5 changed files with 10 additions and 10 deletions.
examples/dreambooth/train_dreambooth.py              +2 -2
examples/dreambooth/train_dreambooth_lora.py         +2 -2
examples/text_to_image/train_text_to_image.py        +2 -2
examples/text_to_image/train_text_to_image_lora.py   +2 -2
examples/textual_inversion/textual_inversion.py      +2 -2
examples/dreambooth/train_dreambooth.py

@@ -1092,8 +1092,8 @@ def main(args):
         unet, optimizer, train_dataloader, lr_scheduler
     )

-    # For mixed precision training we cast the text_encoder and vae weights to half-precision
-    # as these models are only used for inference, keeping weights in full precision is not required.
+    # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision
+    # as these weights are only used for inference, keeping weights in full precision is not required.
     weight_dtype = torch.float32
     if accelerator.mixed_precision == "fp16":
         weight_dtype = torch.float16
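The comment being corrected describes the standard mixed-precision pattern in these scripts: pick an inference dtype from the Accelerate configuration, then move only the frozen models to that dtype while the trained weights stay in float32. Below is a minimal sketch of that pattern; the bf16 branch, the model identifier, and the casting calls are assumptions based on the usual layout of these example scripts, not part of this diff.

import torch
from accelerate import Accelerator
from diffusers import AutoencoderKL
from transformers import CLIPTextModel

# Hypothetical setup standing in for the script's argument parsing and model loading.
model_id = "runwayml/stable-diffusion-v1-5"
accelerator = Accelerator(mixed_precision="fp16")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Pick the inference dtype from the requested mixed-precision mode.
weight_dtype = torch.float32
if accelerator.mixed_precision == "fp16":
    weight_dtype = torch.float16
elif accelerator.mixed_precision == "bf16":  # assumed branch, not shown in the hunk above
    weight_dtype = torch.bfloat16

# Cast only the non-trainable models; the unet being trained stays in float32
# so its gradients and optimizer state keep full precision.
vae.to(accelerator.device, dtype=weight_dtype)
text_encoder.to(accelerator.device, dtype=weight_dtype)  # only when the text encoder is frozen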
examples/dreambooth/train_dreambooth_lora.py

@@ -790,8 +790,8 @@ def main(args):
     text_encoder.requires_grad_(False)
     unet.requires_grad_(False)

-    # For mixed precision training we cast the text_encoder and vae weights to half-precision
-    # as these models are only used for inference, keeping weights in full precision is not required.
+    # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision
+    # as these weights are only used for inference, keeping weights in full precision is not required.
     weight_dtype = torch.float32
     if accelerator.mixed_precision == "fp16":
         weight_dtype = torch.float16
examples/text_to_image/train_text_to_image.py

@@ -747,8 +747,8 @@ def main():
     if args.use_ema:
         ema_unet.to(accelerator.device)

-    # For mixed precision training we cast the text_encoder and vae weights to half-precision
-    # as these models are only used for inference, keeping weights in full precision is not required.
+    # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision
+    # as these weights are only used for inference, keeping weights in full precision is not required.
     weight_dtype = torch.float32
     if accelerator.mixed_precision == "fp16":
         weight_dtype = torch.float16
examples/text_to_image/train_text_to_image_lora.py

@@ -430,8 +430,8 @@ def main():
     text_encoder.requires_grad_(False)

-    # For mixed precision training we cast the text_encoder and vae weights to half-precision
-    # as these models are only used for inference, keeping weights in full precision is not required.
+    # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision
+    # as these weights are only used for inference, keeping weights in full precision is not required.
     weight_dtype = torch.float32
     if accelerator.mixed_precision == "fp16":
         weight_dtype = torch.float16
examples/textual_inversion/textual_inversion.py

@@ -752,8 +752,8 @@ def main():
         text_encoder, optimizer, train_dataloader, lr_scheduler
     )

-    # For mixed precision training we cast the unet and vae weights to half-precision
-    # as these models are only used for inference, keeping weights in full precision is not required.
+    # For mixed precision training we cast all non-trainable weigths (vae, non-lora text_encoder and non-lora unet) to half-precision
+    # as these weights are only used for inference, keeping weights in full precision is not required.
     weight_dtype = torch.float32
     if accelerator.mixed_precision == "fp16":
         weight_dtype = torch.float16
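One nuance in this last file: in the textual inversion script the text encoder is the model being trained (only the embedding for the new placeholder token is optimized), so the removed wording "the unet and vae weights" reflects which modules actually get cast. A rough sketch of that split, assuming unet, vae, text_encoder, and accelerator as they are already defined in the surrounding script:

import torch

# weight_dtype as computed in the hunk above: float32 unless fp16 is requested.
weight_dtype = torch.float32
if accelerator.mixed_precision == "fp16":
    weight_dtype = torch.float16

# unet and vae are inference-only in textual inversion, so half precision is safe.
unet.to(accelerator.device, dtype=weight_dtype)
vae.to(accelerator.device, dtype=weight_dtype)

# The text encoder stays in full precision: the embedding row for the new
# placeholder token is the parameter being optimized.
text_encoder.to(accelerator.device)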