Unverified commit 09d0546a, authored Nov 16, 2022 by dblunk88, committed by GitHub on Nov 16, 2022
cpu offloading: multi GPU support (#1143)

multi GPU support
Parent: 65d136e0
Showing 1 changed file with 2 additions and 2 deletions.
src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py (+2 −2)
@@ -178,7 +178,7 @@ class StableDiffusionPipeline(DiffusionPipeline):
         # set slice_size = `None` to disable `attention slicing`
         self.enable_attention_slicing(None)
 
-    def enable_sequential_cpu_offload(self):
+    def enable_sequential_cpu_offload(self, gpu_id=0):
         r"""
         Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
         text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
@@ -189,7 +189,7 @@ class StableDiffusionPipeline(DiffusionPipeline):
         else:
             raise ImportError("Please install accelerate via `pip install accelerate`")
 
-        device = torch.device("cuda")
+        device = torch.device(f"cuda:{gpu_id}")
 
         for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae, self.safety_checker]:
            if cpu_offloaded_model is not None:
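For context, a minimal usage sketch of the changed method, assuming a machine with at least two CUDA devices; the model id `runwayml/stable-diffusion-v1-5` is only a placeholder for any checkpoint compatible with `StableDiffusionPipeline`:

import torch
from diffusers import StableDiffusionPipeline

# Placeholder model id; any Stable Diffusion checkpoint works here.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Before this commit, offloaded submodules were always brought back onto
# torch.device("cuda"), i.e. GPU 0. The new gpu_id argument lets the caller
# pick the target device, e.g. the second GPU:
pipe.enable_sequential_cpu_offload(gpu_id=1)

# Note: do not call pipe.to("cuda") when using sequential CPU offload;
# accelerate moves each submodule to the GPU only for its forward pass.
image = pipe("a photo of an astronaut riding a horse").images[0]

Since `gpu_id` defaults to 0, existing callers of `enable_sequential_cpu_offload()` keep the previous single-GPU behavior unchanged.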