OpenDAS / text-generation-inference

Commit d6af14b2, authored May 22, 2024 by huangwb

fix HAS_FLASH_ATTN_V2_ROCM flag bug for DCU

Parent: 5a1cf2f0
Showing 1 changed file with 1 addition and 1 deletion.
server/text_generation_server/utils/flash_attn.py
@@ -45,7 +45,7 @@ if IS_CUDA_SYSTEM or IS_ROCM_SYSTEM:
                 "Use the official Docker image (ghcr.io/huggingface/text-generation-inference:latest) "
                 f"or install flash attention v2 with `cd server && make install install-flash-attention-v2{architecture_suffix}`"
             )
-        if not (is_sm8x or is_sm90):
+        if not (is_sm8x or is_sm90) and IS_CUDA_SYSTEM:
             raise ImportError(
                 f"GPU with CUDA capability {major} {minor} is not supported for "
                 "Flash Attention V2"
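The one-line change gates the SM 8.x/9.0 compute-capability check behind IS_CUDA_SYSTEM, so ROCm devices (such as the DCU this commit targets) are no longer rejected by a CUDA-only check and HAS_FLASH_ATTN_V2_ROCM can actually be set. Below is a minimal sketch of the surrounding logic after the fix; the lines outside the hunk (the platform detection and the flag assignments) are reconstructed from the upstream huggingface/text-generation-inference flash_attn.py and may differ slightly in this fork.

import torch

# Platform flags, normally defined in text_generation_server.utils.import_utils;
# inlined here so the sketch is self-contained (reconstructed, not verbatim).
IS_CUDA_SYSTEM = torch.version.cuda is not None
IS_ROCM_SYSTEM = torch.version.hip is not None

# Requires an available GPU device.
major, minor = torch.cuda.get_device_capability()
is_sm8x = major == 8 and minor >= 0
is_sm90 = major == 9 and minor == 0

# Before this commit the check below ran on ROCm systems too. A DCU does not
# report an NVIDIA SM 8.x/9.0 capability, so the ImportError always fired and
# the ROCm flag assignment further down was never reached.
if not (is_sm8x or is_sm90) and IS_CUDA_SYSTEM:
    raise ImportError(
        f"GPU with CUDA capability {major} {minor} is not supported for "
        "Flash Attention V2"
    )

HAS_FLASH_ATTN_V2_CUDA = IS_CUDA_SYSTEM  # True only on CUDA builds
HAS_FLASH_ATTN_V2_ROCM = IS_ROCM_SYSTEM  # now reachable on DCU/ROCm systems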