chenpangpang / ComfyUI
Commit 4a77fcd6
Authored Jul 31, 2023 by comfyanonymous
Parent 3cd31d0e

Only shift text encoder to vram when CPU cores are under 8.
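The check the message refers to uses torch.get_num_threads(), PyTorch's intra-op thread count, rather than counting physical cores directly. A quick way to see which side of the cutoff a given machine falls on (an illustration, not part of the commit):

import torch

# Reports the intra-op thread count PyTorch uses on this machine.
# After this commit, values below 8 cause ComfyUI (in the normal/high
# VRAM states) to shift the text encoder to the GPU instead of keeping
# it on the CPU.
print(torch.get_num_threads())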
Showing 1 changed file with 2 additions and 1 deletion.

comfy/model_management.py  +2 -1
comfy/model_management.py @ 4a77fcd6

@@ -364,7 +364,8 @@ def text_encoder_device():
     if args.gpu_only:
         return get_torch_device()
     elif vram_state == VRAMState.HIGH_VRAM or vram_state == VRAMState.NORMAL_VRAM:
-        if torch.get_num_threads() < 4: #leaving the text encoder on the CPU is faster than shifting it if the CPU is fast enough.
+        #NOTE: on a Ryzen 5 7600X with 4080 it's faster to shift to GPU
+        if torch.get_num_threads() < 8: #leaving the text encoder on the CPU is faster than shifting it if the CPU is fast enough.
             return get_torch_device()
         else:
             return torch.device("cpu")
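For context, here is a minimal, self-contained sketch of the device-selection rule this hunk changes. The names VRAMState, vram_state, args.gpu_only, and get_torch_device come from comfy/model_management.py; they are stubbed below as illustrative assumptions so the snippet runs on its own, and any branch outside the hunk is likewise assumed rather than quoted from the file.

import torch
from enum import Enum

# Illustrative stubs -- the real definitions live in comfy/model_management.py.
class VRAMState(Enum):
    NORMAL_VRAM = 0
    HIGH_VRAM = 1

vram_state = VRAMState.NORMAL_VRAM  # assumed default for this sketch
gpu_only = False                    # stand-in for args.gpu_only

def get_torch_device():
    # Simplified stand-in; the real helper picks the appropriate accelerator.
    return torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

def text_encoder_device():
    if gpu_only:
        return get_torch_device()
    elif vram_state == VRAMState.HIGH_VRAM or vram_state == VRAMState.NORMAL_VRAM:
        #NOTE: on a Ryzen 5 7600X with 4080 it's faster to shift to GPU
        if torch.get_num_threads() < 8:  # a slow CPU gains from moving the text encoder to the GPU
            return get_torch_device()
        else:
            return torch.device("cpu")
    else:
        # Fallback branch outside the hunk, assumed for completeness.
        return torch.device("cpu")

print(text_encoder_device())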