chenpangpang / ComfyUI · Commits

Commit 2da73b70, authored Sep 02, 2023 by Simon Lui

    Revert changes in comfy/ldm/modules/diffusionmodules/util.py, which is unused.

Parent: 4a0c4ce4

Showing 1 changed file with 7 additions and 17 deletions:
comfy/ldm/modules/diffusionmodules/util.py (+7, -17)
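For context, the branch being reverted selected an autocast context per backend (torch.cuda.amp.autocast vs torch.xpu.amp.autocast) via model_management checks. A minimal device-agnostic sketch, assuming a recent PyTorch where torch.autocast accepts the backend as a string (the helper name grad_autocast is hypothetical, not part of ComfyUI):

```python
import torch

def grad_autocast(device_type: str, **autocast_kwargs):
    # torch.autocast dispatches on a device-type string ("cuda", "xpu",
    # "cpu", ...), so a single call site can stand in for per-backend
    # branches like torch.cuda.amp.autocast vs torch.xpu.amp.autocast.
    return torch.autocast(device_type=device_type, **autocast_kwargs)

# Usage: the context manager behaves the same way on any backend.
with grad_autocast("cpu", dtype=torch.bfloat16):
    m = torch.mm(torch.ones(2, 2), torch.ones(2, 2))
```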
@@ -15,7 +15,6 @@ import torch.nn as nn
 import numpy as np
 from einops import repeat
-from comfy import model_management
 from comfy.ldm.util import instantiate_from_config
 import comfy.ops
@@ -140,22 +139,13 @@ class CheckpointFunction(torch.autograd.Function):
     @staticmethod
     def backward(ctx, *output_grads):
         ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
-        if model_management.is_nvidia():
-            with torch.enable_grad(), \
-                torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):
-                # Fixes a bug where the first op in run_function modifies the
-                # Tensor storage in place, which is not allowed for detach()'d
-                # Tensors.
-                shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
-                output_tensors = ctx.run_function(*shallow_copies)
-        elif model_management.is_intel_xpu():
-            with torch.enable_grad(), \
-                torch.xpu.amp.autocast(**ctx.gpu_autocast_kwargs):
-                # Fixes a bug where the first op in run_function modifies the
-                # Tensor storage in place, which is not allowed for detach()'d
-                # Tensors.
-                shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
-                output_tensors = ctx.run_function(*shallow_copies)
+        with torch.enable_grad(), \
+            torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs):
+            # Fixes a bug where the first op in run_function modifies the
+            # Tensor storage in place, which is not allowed for detach()'d
+            # Tensors.
+            shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
+            output_tensors = ctx.run_function(*shallow_copies)
         input_grads = torch.autograd.grad(
             output_tensors,
             ctx.input_tensors + ctx.input_params,
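The second hunk centers on CheckpointFunction.backward: the saved inputs are detach()ed and re-marked as requiring grad, shallow-copied with view_as so an in-place first op in run_function cannot corrupt the detached tensors' storage, then the forward is re-run under enable_grad and gradients are recovered with torch.autograd.grad. A stripped-down sketch of that pattern, omitting the autocast kwargs and input_params handling of the real class (SimpleCheckpoint is a hypothetical name, not ComfyUI's):

```python
import torch

class SimpleCheckpoint(torch.autograd.Function):
    @staticmethod
    def forward(ctx, run_function, *args):
        ctx.run_function = run_function
        ctx.input_tensors = list(args)
        # Forward runs without building a graph; activations are not kept.
        with torch.no_grad():
            output = run_function(*args)
        return output

    @staticmethod
    def backward(ctx, *output_grads):
        # Detach and re-enable grad so the recomputation builds a fresh graph.
        ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
        with torch.enable_grad():
            # view_as makes shallow copies, so an in-place first op in
            # run_function cannot modify the detached tensors' storage.
            shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
            output = ctx.run_function(*shallow_copies)
        input_grads = torch.autograd.grad(output, ctx.input_tensors, output_grads)
        # First None is the gradient slot for the run_function argument.
        return (None,) + input_grads

# Usage: gradients match running the function directly.
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = SimpleCheckpoint.apply(lambda t: (t * t).sum(), x)
y.backward()  # x.grad is now [2.0, 4.0]
```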