ComfyUI (chenpangpang/ComfyUI)

Commit 8594c8be, authored Oct 22, 2023 by comfyanonymous

    Empty the cache when torch cache is more than 25% free mem.

Parent: 8b65f5de
Changes: 1 changed file, with 5 additions and 1 deletion.

comfy/model_management.py (+5 −1)
...
@@ -339,7 +339,11 @@ def free_memory(memory_required, device, keep_loaded=[]):
     if unloaded_model:
         soft_empty_cache()
+    else:
+        if vram_state != VRAMState.HIGH_VRAM:
+            mem_free_total, mem_free_torch = get_free_memory(device, torch_free_too=True)
+            if mem_free_torch > mem_free_total * 0.25:
+                soft_empty_cache()

 def load_models_gpu(models, memory_required=0):
     global vram_state
...
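For readers tracing the logic, here is a minimal, self-contained sketch of the threshold check this commit introduces. It models the decision only: `should_empty_cache` and its `high_vram` flag are hypothetical names, and the real code obtains the two memory figures from `get_free_memory(device, torch_free_too=True)` rather than taking them as arguments.

```python
# Hypothetical sketch of the commit's decision rule, not ComfyUI's actual code:
# call soft_empty_cache() only when torch's caching allocator is holding a
# meaningful share (> 25%) of the device's free memory.

def should_empty_cache(mem_free_total, mem_free_torch, high_vram=False):
    """Return True when torch's cache holds more than 25% of free memory.

    mem_free_total: bytes free on the device (including torch's cache)
    mem_free_torch: bytes of that free memory held inside torch's allocator
    high_vram:      mirrors the VRAMState.HIGH_VRAM guard in the diff
    """
    if high_vram:
        # In HIGH_VRAM mode the diff skips the check entirely.
        return False
    return mem_free_torch > mem_free_total * 0.25

# Example: 8 GiB free overall, 3 GiB of it cached by torch -> empty the cache.
GiB = 1024 ** 3
print(should_empty_cache(8 * GiB, 3 * GiB))  # True  (3/8 = 37.5% > 25%)
print(should_empty_cache(8 * GiB, 1 * GiB))  # False (1/8 = 12.5%)
```

The point of the threshold is to avoid the cost of emptying the allocator cache when doing so would return only a small amount of memory to the device.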