gaoqiong / flash-attention · Commits

Commit 8dd52b07 (unverified)
Authored Oct 06, 2022 by Tri Dao; committed by GitHub, Oct 06, 2022

    Merge pull request #55 from ajfadam/main

    remove numpy dependency

Parents: 88dc2040, 4e38df05
Changes: 1 changed file with 2 additions and 4 deletions (+2, -4)

flash_attn/bert_padding.py
--- a/flash_attn/bert_padding.py
+++ b/flash_attn/bert_padding.py
 # Adapted from https://github.com/mlcommons/training_results_v1.1/blob/main/NVIDIA/benchmarks/bert/implementations/pytorch/padding.py
-import numpy as np
 import torch
 import torch.nn.functional as F
 ...
@@ -15,7 +13,7 @@ class IndexFirstAxis(torch.autograd.Function):
         ctx.save_for_backward(indices)
         assert input.ndim >= 2
         ctx.first_axis_dim, other_shape = input.shape[0], input.shape[1:]
-        second_dim = np.prod(other_shape)
+        second_dim = other_shape.numel()
         # TD [2022-03-04] For some reason torch.gather is a bit faster than indexing.
         # return input[indices]
         return torch.gather(rearrange(input, 'b ... -> b (...)'), 0, ...
 ...
@@ -71,7 +69,7 @@ class IndexFirstAxisResidual(torch.autograd.Function):
         ctx.save_for_backward(indices)
         assert input.ndim >= 2
         ctx.first_axis_dim, other_shape = input.shape[0], input.shape[1:]
-        second_dim = np.prod(other_shape)
+        second_dim = other_shape.numel()
         # TD [2022-03-04] For some reason torch.gather is a bit faster than indexing.
         output = input[indices]
         # We don't want to reshape input (b ... -> b (...)) since it could change the channel_last
 ...
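The replacement works because `input.shape[1:]` is a `torch.Size`, whose `numel()` method returns the product of its entries, exactly what `np.prod(other_shape)` computed before, so NumPy is no longer needed. A minimal standalone sketch of that equivalence, plus the gather-along-the-first-axis trick the diff's comment refers to (illustrative values, not the repo's exact call, which is truncated above):

```python
import math
import torch

x = torch.arange(24.0).reshape(4, 2, 3)
first_axis_dim, other_shape = x.shape[0], x.shape[1:]

# other_shape is a torch.Size; .numel() is the product of its entries,
# matching what np.prod(other_shape) returned before this commit.
second_dim = other_shape.numel()
assert second_dim == math.prod(other_shape) == 6

# The gather-based first-axis indexing used in the file: flatten the
# trailing dims ('b ... -> b (...)'), expand indices, gather along dim 0.
indices = torch.tensor([0, 3])
flat = x.reshape(first_axis_dim, second_dim)
idx = indices.unsqueeze(-1).expand(-1, second_dim)
gathered = torch.gather(flat, 0, idx)

# Same result as plain fancy indexing, which the comment says is slightly slower.
assert torch.equal(gathered, flat[indices])
```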