transformers · commit e201864b (unverified)
Authored by Younes Belkada on Jan 22, 2024; committed via GitHub on Jan 22, 2024
[`GPTNeoX`] Fix GPTNeoX + Flash Attention 2 issue (#28645)
Update modeling_gpt_neox.py
Parent: dafd5951
Showing 1 changed file with 1 addition and 1 deletion.
src/transformers/models/gpt_neox/modeling_gpt_neox.py (+1, -1)
@@ -390,7 +390,7 @@ class GPTNeoXFlashAttention2(GPTNeoXAttention):
             elif hasattr(self.config, "_pre_quantization_dtype"):
                 target_dtype = self.config._pre_quantization_dtype
             else:
-                target_dtype = self.q_proj.weight.dtype
+                target_dtype = self.query_key_value.weight.dtype

             logger.warning_once(
                 f"The input hidden states seems to be silently casted in float32, this might be related to"
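For context: GPTNeoX builds its attention with a single fused query_key_value linear layer rather than separate q_proj/k_proj/v_proj modules, so the previous fallback `self.q_proj.weight.dtype` failed with an AttributeError whenever the Flash Attention 2 path had to downcast float32 hidden states. Below is a minimal sketch of that dtype-resolution fallback; the helper name resolve_target_dtype is hypothetical and this is not the verbatim Transformers implementation, only an illustration of the logic the one-line fix touches.

# Minimal sketch (illustrative, not the verbatim Transformers code) of the
# dtype-resolution fallback used on the Flash Attention 2 path.
import torch

def resolve_target_dtype(attn, input_dtype):
    # Flash Attention 2 only runs in fp16/bf16. If the hidden states arrive
    # in float32 (e.g. upcast embeddings or layer norms), pick a dtype to
    # cast them back to before calling the kernel.
    if input_dtype != torch.float32:
        return input_dtype
    if torch.is_autocast_enabled():
        return torch.get_autocast_gpu_dtype()
    # Quantized models record the original dtype on the config.
    if hasattr(attn.config, "_pre_quantization_dtype"):
        return attn.config._pre_quantization_dtype
    # GPTNeoX uses one fused query_key_value projection instead of separate
    # q_proj/k_proj/v_proj modules, hence this commit's one-line change.
    return attn.query_key_value.weight.dtype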