OpenDAS / Megatron-LM · Commits

Commit 4916bae6
Authored Feb 04, 2021 by Vijay Korthikanti
Parent: 872e38ea

    conditioning fused kernels

Showing 1 changed file with 3 additions and 1 deletion:
megatron/model/fused_softmax.py (+3, -1)
@@ -119,11 +119,13 @@ class FusedScaleMaskSoftmax(torch.nn.Module):
         data_size = input.size()
         query_seq_len = data_size[-2]
         key_seq_len = data_size[-1]
+        attn_batch_size = data_size[0] * data_size[1]
         assert input.dim() == 4

         # invoke custom kernel
         if self.input_in_fp16 and key_seq_len <= 2048 and mask is not None and \
-           query_seq_len % 4 == 0 and self.scaled_masked_softmax_fusion:
+           query_seq_len % 4 == 0 and key_seq_len > 16 and \
+           attn_batch_size % 4 == 0 and self.scaled_masked_softmax_fusion:
             scale = self.scale if self.scale is not None else 1.0
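The commit tightens the gate on the fused scaled-masked-softmax CUDA kernel: in addition to the existing fp16/sequence-length checks, `key_seq_len` must now exceed 16 and `batch * heads` must be a multiple of 4. A minimal standalone sketch of the combined condition, assuming the input shape is `(batch, heads, query_seq_len, key_seq_len)`; the helper function is hypothetical and operates on a plain shape tuple so it runs without PyTorch:

```python
def can_use_fused_softmax(shape, mask_present, input_in_fp16, fusion_enabled):
    """Sketch of the kernel-eligibility check after commit 4916bae6.

    shape is assumed to be (batch, heads, query_seq_len, key_seq_len),
    matching the 4-D input asserted in FusedScaleMaskSoftmax.forward.
    """
    assert len(shape) == 4
    query_seq_len = shape[-2]
    key_seq_len = shape[-1]
    # New in this commit: flatten batch and head dims into one size.
    attn_batch_size = shape[0] * shape[1]
    return (input_in_fp16
            and key_seq_len <= 2048
            and mask_present
            and query_seq_len % 4 == 0
            and key_seq_len > 16            # condition added by this commit
            and attn_batch_size % 4 == 0    # condition added by this commit
            and fusion_enabled)
```

For example, a `(2, 8, 128, 128)` fp16 input with a mask passes (16 attention batches, sequence lengths in range), while `(2, 8, 128, 16)` falls back to the unfused path because `key_seq_len` is not greater than 16.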