chenpangpang / transformers · Commits · a28a7699

Unverified commit a28a7699, authored Jan 27, 2024 by Joao Gante, committed via GitHub on Jan 27, 2024.

Falcon: removed unused function (#28605)
parent de13a951

Showing 1 changed file with 0 additions and 11 deletions:

src/transformers/models/falcon/modeling_falcon.py (+0, -11)
```diff
@@ -214,17 +214,6 @@ class FalconDynamicNTKScalingRotaryEmbedding(FalconRotaryEmbedding):
         self.register_buffer("sin_cached", emb.sin().to(dtype), persistent=False)
 
 
-def _prepare_4d_attention_mask(mask: torch.Tensor, past_key_values_length: int) -> torch.BoolTensor:
-    """
-    Expands attention_mask from `[batch_size, seq_length]` to `[batch_size, 1, seq_length, seq_length + past_length]`.
-    """
-    batch_size, total_length = mask.shape
-    seq_length = total_length - past_key_values_length if past_key_values_length is not None else total_length
-
-    expanded_mask = ~(mask[:, None, None, :].to(torch.bool))
-    return expanded_mask.expand(batch_size, 1, seq_length, total_length)
-
-
 def build_alibi_tensor(attention_mask: torch.Tensor, num_heads: int, dtype: torch.dtype) -> torch.Tensor:
     batch_size, seq_length = attention_mask.shape
     closest_power_of_2 = 2 ** math.floor(math.log2(num_heads))
```
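For reference, the deleted helper expanded a 2D padding mask into the inverted 4D boolean mask that attention layers consume. The sketch below is a standalone copy of that logic (the top-level name `prepare_4d_attention_mask` is chosen here for illustration; the body mirrors the removed `_prepare_4d_attention_mask`):

```python
import torch


def prepare_4d_attention_mask(mask: torch.Tensor, past_key_values_length: int) -> torch.BoolTensor:
    """Expand a `[batch_size, seq_length]` padding mask to
    `[batch_size, 1, seq_length, total_length]`, inverted so that True
    marks positions that should be masked out."""
    batch_size, total_length = mask.shape
    seq_length = total_length - past_key_values_length if past_key_values_length is not None else total_length

    # Invert: 1 (attend) -> False, 0 (padding) -> True, then broadcast to 4D.
    expanded_mask = ~(mask[:, None, None, :].to(torch.bool))
    return expanded_mask.expand(batch_size, 1, seq_length, total_length)


mask = torch.tensor([[1, 1, 0]])  # last position is padding
out = prepare_4d_attention_mask(mask, past_key_values_length=0)
print(tuple(out.shape))       # (1, 1, 3, 3)
print(out[0, 0, 0].tolist())  # [False, False, True]
```

The function was removed because nothing in `modeling_falcon.py` called it any longer; the sketch only documents what the dead code used to compute.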