chenpangpang / ComfyUI · Commits

Commit efb704c7, authored Dec 07, 2023 by comfyanonymous

    Support attention masking in CLIP implementation.

Parent: 248d9125

Showing 1 changed file with 6 additions and 2 deletions.

comfy/clip_model.py (+6, -2)
@@ -100,8 +100,12 @@ class CLIPTextModel_(torch.nn.Module):
     def forward(self, input_tokens, attention_mask=None, intermediate_output=None, final_layer_norm_intermediate=True):
         x = self.embeddings(input_tokens)
-        #TODO: attention_mask
-        x, i = self.encoder(x, intermediate_output=intermediate_output)
+        mask = None
+        if attention_mask is not None:
+            mask = 1.0 - attention_mask.to(x.dtype).unsqueeze(1).unsqueeze(1).expand(attention_mask.shape[0], 1, attention_mask.shape[-1], attention_mask.shape[-1])
+            mask = mask.masked_fill(mask.to(torch.bool), float("-inf"))
+
+        x, i = self.encoder(x, mask=mask, intermediate_output=intermediate_output)
         x = self.final_layer_norm(x)
         if i is not None and final_layer_norm_intermediate:
             i = self.final_layer_norm(i)
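The mask construction introduced here can be exercised in isolation. The sketch below wraps the same expression in a hypothetical helper (`build_attention_bias` is a name chosen for illustration; the commit inlines this logic in `CLIPTextModel_.forward`) to show how a `(batch, seq_len)` padding mask becomes an additive `(batch, 1, seq_len, seq_len)` bias, and how that bias zeroes out padded keys after softmax.

```python
import torch

def build_attention_bias(attention_mask, dtype=torch.float32):
    # attention_mask: (batch, seq_len) with 1 = real token, 0 = padding.
    # Invert it and broadcast to (batch, 1, seq_len, seq_len)...
    mask = 1.0 - attention_mask.to(dtype).unsqueeze(1).unsqueeze(1).expand(
        attention_mask.shape[0], 1, attention_mask.shape[-1], attention_mask.shape[-1])
    # ...then turn the 1.0 entries (padded positions) into -inf so that
    # softmax assigns them zero attention weight.
    return mask.masked_fill(mask.to(torch.bool), float("-inf"))

attention_mask = torch.tensor([[1, 1, 1, 0]])   # one sequence, last token padded
bias = build_attention_bias(attention_mask)
print(bias.shape)        # torch.Size([1, 1, 4, 4])
print(bias[0, 0, 0])     # tensor([0., 0., 0., -inf])

# Downstream, an additive bias like this is added to the raw attention
# scores before softmax, so the padded key receives zero probability.
probs = torch.softmax(torch.zeros(1, 1, 4, 4) + bias, dim=-1)
print(probs[0, 0, 0])    # tensor([0.3333, 0.3333, 0.3333, 0.0000])
```

Encoding the mask additively (0.0 for keep, -inf for drop) lets the encoder apply it with a plain addition to the score matrix, which is why the `None` case can simply skip the addition.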