OpenDAS / diffusers · Commits

Commit e86a280c (unverified)
Authored Nov 07, 2022 by Pedro Cuenca; committed by GitHub, Nov 07, 2022

Remove warning about half precision on MPS (#1163)

    Remove warning about half precision on MPS.
Parent: b4a1ed85

Changes: 1 changed file with 6 additions and 6 deletions

src/diffusers/pipeline_utils.py (+6 −6)
--- a/src/diffusers/pipeline_utils.py
+++ b/src/diffusers/pipeline_utils.py
@@ -209,13 +209,13 @@ class DiffusionPipeline(ConfigMixin):
         for name in module_names.keys():
             module = getattr(self, name)
             if isinstance(module, torch.nn.Module):
-                if module.dtype == torch.float16 and str(torch_device) in ["cpu", "mps"]:
+                if module.dtype == torch.float16 and str(torch_device) in ["cpu"]:
                     logger.warning(
-                        "Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` or `mps` device. It"
-                        " is not recommended to move them to `cpu` or `mps` as running them will fail. Please make"
-                        " sure to use a `cuda` device to run the pipeline in inference. due to the lack of support for"
-                        " `float16` operations on those devices in PyTorch. Please remove the"
-                        " `torch_dtype=torch.float16` argument, or use a `cuda` device to run inference."
+                        "Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` device. It"
+                        " is not recommended to move them to `cpu` as running them will fail. Please make"
+                        " sure to use an accelerator to run the pipeline in inference, due to the lack of"
+                        " support for `float16` operations on this device in PyTorch. Please, remove the"
+                        " `torch_dtype=torch.float16` argument, or use another device for inference."
                     )
                 module.to(torch_device)
         return self
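In effect, the commit narrows the warning condition from `["cpu", "mps"]` to `["cpu"]`: moving a float16 pipeline to `mps` no longer triggers the warning, only `cpu` does. A minimal sketch of the changed predicate, with a hypothetical helper name (`should_warn_half_precision` is not part of diffusers; the real check lives inline in `DiffusionPipeline.to`):

```python
def should_warn_half_precision(dtype: str, device: str) -> bool:
    """Return True when moving a float16 pipeline to a device that
    lacks float16 support and should emit a warning.

    After this commit only "cpu" qualifies; "mps" was removed from
    the list because it can run half-precision pipelines.
    """
    return dtype == "float16" and device in ["cpu"]


print(should_warn_half_precision("float16", "cpu"))   # True: warn
print(should_warn_half_precision("float16", "mps"))   # False: no longer warns
print(should_warn_half_precision("float32", "cpu"))   # False: full precision is fine
```

Note that `str(torch_device)` in the real code normalizes a `torch.device` object to its string form before the membership test, which is what makes this string-based comparison equivalent.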