chenpangpang / ComfyUI

Commit 24129d78, authored Feb 04, 2024 by comfyanonymous
Parent commit: 98b80ad1

Speed up SDXL on 16xx series with fp16 weights and manual cast.
Changes: 1 file changed, 3 additions(+), 3 deletions(-)

comfy/model_management.py (+3, -3)
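The commit title pairs "fp16 weights" with "manual cast": weights are stored in fp16 to halve memory, but are cast up to fp32 immediately before each operation, so GPUs with unreliable fp16 arithmetic (like the GTX 16xx series) still compute in fp32. Below is a minimal generic sketch of that idea in NumPy; the `ManualCastLinear` class is hypothetical, not ComfyUI's actual `comfy.ops` implementation:

```python
import numpy as np

# Sketch of manual casting (hypothetical, for illustration only):
# store weights in fp16, cast to fp32 at compute time.
class ManualCastLinear:
    def __init__(self, weight_fp32):
        # Storage dtype is fp16: half the memory of fp32.
        self.weight = weight_fp32.astype(np.float16)

    def __call__(self, x):
        # Compute dtype is fp32: cast both operands up before the matmul,
        # so the arithmetic itself never runs in fp16.
        return x.astype(np.float32) @ self.weight.astype(np.float32)

w = np.eye(4, dtype=np.float32) * 2.0
layer = ManualCastLinear(w)
out = layer(np.ones((1, 4), dtype=np.float16))
```

The trade-off is an extra cast per layer in exchange for fp16-sized weight storage, which is what lets SDXL fit and run faster on 16xx cards.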
comfy/model_management.py

@@ -496,7 +496,7 @@ def unet_dtype(device=None, model_params=0):
         return torch.float8_e4m3fn
     if args.fp8_e5m2_unet:
         return torch.float8_e5m2
-    if should_use_fp16(device=device, model_params=model_params):
+    if should_use_fp16(device=device, model_params=model_params, manual_cast=True):
         return torch.float16
     return torch.float32

@@ -696,7 +696,7 @@ def is_device_mps(device):
         return True
     return False

-def should_use_fp16(device=None, model_params=0, prioritize_performance=True):
+def should_use_fp16(device=None, model_params=0, prioritize_performance=True, manual_cast=False):
     global directml_enabled

     if device is not None:

@@ -738,7 +738,7 @@ def should_use_fp16(device=None, model_params=0, prioritize_performance=True):
             if x in props.name.lower():
                 fp16_works = True

-    if fp16_works:
+    if fp16_works or manual_cast:
         free_model_memory = (get_free_memory() * 0.9 - minimum_inference_memory())
         if (not prioritize_performance) or model_params * 4 > free_model_memory:
             return True
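The effect of the patch is that `unet_dtype` can now pick fp16 weight storage even when the GPU's fp16 compute is flagged as broken, because the new `manual_cast=True` argument bypasses the `fp16_works` gate. A simplified, self-contained sketch of that decision logic (plain Python stand-ins, not ComfyUI's real signatures, with memory figures passed in explicitly rather than queried from the device):

```python
# Simplified sketch of the patched selection logic. `fp16_works` and
# `free_model_memory` are hypothetical parameters standing in for the
# device probing that the real code does.
def should_use_fp16(fp16_works, model_params=0, free_model_memory=8e9,
                    prioritize_performance=True, manual_cast=False):
    # Before the commit, only fp16_works enabled this branch; after it,
    # manual_cast also qualifies, since compute will be cast up anyway.
    if fp16_works or manual_cast:
        # Weights in fp32 take model_params * 4 bytes; prefer fp16 when
        # they would not comfortably fit, or when memory is prioritized.
        if (not prioritize_performance) or model_params * 4 > free_model_memory:
            return True
    return False

def unet_dtype(fp16_works, model_params=0, free_model_memory=8e9):
    # Mirrors the patched call site: manual_cast=True is always passed,
    # so fp16 storage is available regardless of fp16 compute support.
    if should_use_fp16(fp16_works, model_params, free_model_memory,
                       manual_cast=True):
        return "float16"
    return "float32"
```

For example, a 16xx-class card (`fp16_works=False`) loading an SDXL-sized UNet (~2.6B parameters, so ~10.4 GB in fp32) against ~6 GB of free memory now selects fp16 weight storage, where before the commit it would have fallen back to fp32.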