OpenDAS / Megatron-LM

Commit 50fe58fa, authored Dec 18, 2024 by wxj

    Update transformer.py

Parent: 52610942
Pipeline #2151 passed
Showing 1 changed file with 2 additions and 2 deletions.

megatron/legacy/model/transformer.py (+2, -2)
@@ -582,13 +582,13 @@ class ParallelAttention(MegatronModule):
         else:
             kv_projection_size = args.kv_channels * args.num_attention_heads

-        self.use_flash_attn = (args.use_flash_attn_ck or args.use_flash_attn_triton) \
+        self.use_flash_attn = (args.use_flash_attn_cutlass or args.use_flash_attn_triton) \
             and attention_type == AttnType.self_attn \
             and self.attn_mask_type == AttnMaskType.causal

         self.use_flash_attn_triton = args.use_flash_attn_triton
         if self.use_flash_attn:
-            if args.use_flash_attn_ck:
+            if args.use_flash_attn_cutlass:
                 if flash_attn_unpadded_func is None:
                     raise ImportError('FlashAttention is not installed, please install with '
                                       'pip install flash-attn')
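The hunk renames the flag that gates the non-Triton FlashAttention path from args.use_flash_attn_ck to args.use_flash_attn_cutlass, and updates the matching guard that checks flash_attn_unpadded_func before use. The diff only shows the attribute being read from args; as a minimal sketch, the flags could be declared on a Megatron-style argument parser roughly as below. The add_flash_attn_args helper, the argument-group title, and the help strings are assumptions for illustration, not code taken from this commit.

# Hypothetical sketch: declaring the two backend flags the diff reads from `args`.
# Only the attribute names (use_flash_attn_cutlass, use_flash_attn_triton) come
# from the diff; everything else here is assumed.
import argparse

def add_flash_attn_args(parser: argparse.ArgumentParser) -> argparse.ArgumentParser:
    group = parser.add_argument_group(title='flash attention')
    group.add_argument('--use-flash-attn-cutlass', action='store_true',
                       help='Enable the CUTLASS-based FlashAttention kernels '
                            '(requires flash_attn_unpadded_func from flash-attn).')
    group.add_argument('--use-flash-attn-triton', action='store_true',
                       help='Enable the Triton FlashAttention implementation.')
    return parser

if __name__ == '__main__':
    args = add_flash_attn_args(argparse.ArgumentParser()).parse_args(
        ['--use-flash-attn-cutlass'])
    # argparse maps --use-flash-attn-cutlass to args.use_flash_attn_cutlass,
    # the attribute read in ParallelAttention above.
    print(args.use_flash_attn_cutlass, args.use_flash_attn_triton)  # True False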