Commit 41c42f85 (Unverified)
Authored Oct 17, 2023 by Younes Belkada; committed by GitHub on Oct 17, 2023

[`FA2`] Fix flash attention 2 fine-tuning with Falcon (#26852)

fix fa2 + dropout issue

Parent: 4b423e60
Showing 2 changed files with 5 additions and 1 deletion:

src/transformers/models/falcon/modeling_falcon.py   +1 -1
tests/test_modeling_common.py                        +4 -0
src/transformers/models/falcon/modeling_falcon.py

@@ -606,7 +606,7 @@ class FalconFlashAttention2(FalconAttention):
         if alibi is not None:
             raise ValueError("`alibi` is not supported when `use_flash_attn` is True")
 
-        attn_dropout = self.attention_dropout if self.training else 0.0
+        attn_dropout = self.config.attention_dropout if self.training else 0.0
 
         # In PEFT, usually we cast the layer norms in float32 for training stability reasons
         # therefore the input hidden states gets silently casted in float32. Hence, we need
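For context on why this one-line change matters, below is a minimal sketch (not part of the commit; shapes and values are illustrative) of how the flash-attn kernel consumes the dropout setting. `flash_attn_func` expects `dropout_p` to be a float probability, which is why the layer must read the number stored on `config.attention_dropout` rather than, presumably, the `nn.Dropout` module kept under the same name on the attention layer. The sketch assumes a CUDA GPU with the `flash-attn` package installed.

import torch
from flash_attn import flash_attn_func

# Illustrative shapes: (batch, seq_len, num_heads, head_dim), fp16 on CUDA.
q = torch.randn(2, 128, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 128, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 128, 8, 64, dtype=torch.float16, device="cuda")

training = True
attention_dropout = 0.1  # the float stored on the model config

# Mirrors the fixed line: use the config's float probability while training,
# and disable dropout entirely at inference time.
attn_dropout = attention_dropout if training else 0.0
out = flash_attn_func(q, k, v, dropout_p=attn_dropout, causal=True)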
tests/test_modeling_common.py

@@ -2810,6 +2810,10 @@ class ModelTesterMixin:
             self.assertTrue(torch.allclose(logits_fa[1:], logits[1:], atol=4e-2, rtol=4e-2))
 
+            # check with inference + dropout
+            model.train()
+            _ = model_fa(dummy_input, attention_mask=dummy_attention_mask, output_hidden_states=True)
+
     @require_flash_attn
     @require_torch_gpu
     @mark.flash_attn_test
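The new test lines exercise the model in train mode, which is the scenario the commit title promises to fix. A hedged end-to-end sketch of that same scenario, not taken from the commit: it assumes a CUDA GPU with flash-attn installed, uses the public `tiiuae/falcon-7b` checkpoint, relies on the `use_flash_attention_2=True` loading flag from this era of transformers (later superseded by `attn_implementation="flash_attention_2"`), and explicitly sets a non-zero `attention_dropout` so the fixed code path is actually active.

import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"

# Set a non-zero attention dropout so the FA2 dropout path is exercised in training.
config = AutoConfig.from_pretrained(model_id)
config.attention_dropout = 0.1

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    use_flash_attention_2=True,
).to("cuda")

model.train()  # dropout active: the case that failed before this fix

inputs = tokenizer("Flash attention 2 fine-tuning smoke test", return_tensors="pt").to("cuda")
outputs = model(**inputs, labels=inputs["input_ids"])
outputs.loss.backward()  # one fine-tuning step with FA2 + dropout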