OpenDAS / apex · Commit 980d5f44

Commit 980d5f44, authored Feb 16, 2022 by hubertlu-tw
Fix torch._softmax_backward_data arguments
Parent: 5de49cc9
Showing 2 changed files with 4 additions and 2 deletions:

apex/contrib/multihead_attn/encdec_multihead_attn_func.py (+2 -1)
apex/contrib/multihead_attn/self_multihead_attn_func.py (+2 -1)
apex/contrib/multihead_attn/encdec_multihead_attn_func.py

@@ -206,7 +206,8 @@ class EncdecAttnFunc(torch.autograd.Function):
         dropout_grads = torch._masked_scale(matmul2_dgrad1, dropout_mask, 1.0 / (1.0 - dropout_prob_t[0]))
         # Softmax Grad (not a publically documented op)
-        softmax_grads = torch._softmax_backward_data(dropout_grads, softmax_results, -1, softmax_results)
+        ### softmax_grads = torch._softmax_backward_data(dropout_grads, softmax_results, -1, softmax_results) # og
+        softmax_grads = torch._softmax_backward_data(dropout_grads, softmax_results, -1, torch.float32, grad_input=softmax_results)
         # Matmul1 - DGRAD1
         # Input1: (data grads) [seqs*heads, seql_q, seql_k]
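Background on the fix: as of PyTorch 1.11, the private op torch._softmax_backward_data takes the dtype of the forward input (input_dtype) as its fourth argument instead of the input tensor itself, so the old call above fails on newer PyTorch. The grad_input keyword selects the op's out-variant, which writes the result into an existing tensor. Below is a minimal version-compatibility sketch, not part of this commit; the helper name softmax_backward and the version-string check are assumptions.

import torch

# Hypothetical helper, not from this commit: dispatch on the PyTorch version,
# since the 4th argument of torch._softmax_backward_data changed from the
# forward input tensor (< 1.11) to that tensor's dtype (>= 1.11).
_TORCH_MAJOR, _TORCH_MINOR = (int(v) for v in torch.__version__.split(".")[:2])

def softmax_backward(grad_output, output, dim, inp):
    if (_TORCH_MAJOR, _TORCH_MINOR) >= (1, 11):
        # Newer signature: pass the input's dtype.
        return torch._softmax_backward_data(grad_output, output, dim, inp.dtype)
    # Older signature: pass the forward input tensor itself.
    return torch._softmax_backward_data(grad_output, output, dim, inp)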
apex/contrib/multihead_attn/self_multihead_attn_func.py

@@ -189,7 +189,8 @@ class SelfAttnFunc(torch.autograd.Function):
         dropout_grads = torch._masked_scale(matmul2_dgrad1, dropout_mask, 1.0 / (1.0 - dropout_prob_t[0]))
         # Softmax Grad (not a publically documented op)
-        softmax_grads = torch._softmax_backward_data(dropout_grads, softmax_results, -1, softmax_results)
+        ### softmax_grads = torch._softmax_backward_data(dropout_grads, softmax_results, -1, softmax_results) # og
+        softmax_grads = torch._softmax_backward_data(dropout_grads, softmax_results, -1, torch.float32, grad_input=softmax_results)
         # Matmul1 - DGRAD1
         # Input1: (data grads) [seqs*heads, seql_q, seql_k]
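The same two-line change is applied in both files. Note the replacement call hard-codes input_dtype as torch.float32, which presumably matches the precision these contrib kernels compute the softmax in, and passes grad_input=softmax_results so that the out-variant reuses the saved softmax output as the gradient buffer rather than allocating a new tensor; that in-place reuse is only safe if softmax_results is not read again later in the backward pass.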