chenpangpang / transformers · Commits

Commit 0753134f (unverified)
Authored May 16, 2024 by Mohit Sharma; committed via GitHub on May 16, 2024
Parent: 9d889f87

Disable the FA backend for SDPA on AMD GPUs (#30850)

* disable fa
* disable fa
* update warning
* update warning
Showing 1 changed file with 10 additions and 0 deletions.

src/transformers/modeling_utils.py (+10, -0)
@@ -1479,6 +1479,16 @@ class PreTrainedModel(nn.Module, ModuleUtilsMixin, GenerationMixin, PushToHubMixin):
                 config,
                 hard_check_only=False if requested_attn_implementation is None else True,
             )
+
+            if (
+                torch.version.hip is not None
+                and config._attn_implementation == "sdpa"
+                and torch.cuda.device_count() > 1
+            ):
+                logger.warning_once(
+                    "Using the `SDPA` attention implementation on multi-gpu setup with ROCM may lead to performance issues due to the FA backend. Disabling it to use alternative backends."
+                )
+                torch.backends.cuda.enable_flash_sdp(False)
         else:
             config._attn_implementation = "eager"
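
For context, a minimal sketch (not part of the commit) of what the patch means at runtime, assuming PyTorch 2.x on a ROCm multi-GPU machine: once a model is loaded with attn_implementation="sdpa", the flash backend of scaled_dot_product_attention is disabled process-wide, and it can be inspected or re-enabled through torch.backends.cuda.

import torch

# Sketch only: the same guard the patch uses (ROCm build with more than one GPU).
if torch.version.hip is not None and torch.cuda.device_count() > 1:
    # After from_pretrained(..., attn_implementation="sdpa") runs with this patch applied,
    # the flash SDP backend is off for the whole process, so SDPA falls back to the
    # memory-efficient or math kernels.
    print("flash SDP enabled:", torch.backends.cuda.flash_sdp_enabled())

    # The switch is global; it can be flipped back manually if the FA backend is wanted anyway.
    torch.backends.cuda.enable_flash_sdp(True)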