OpenDAS / text-generation-inference · Commits

Commit 37555cf4 (unverified)
fix: max_past default value must be -1, not 0 (#1348)

Authored Dec 15, 2023 by OlivierDehaene, committed via GitHub on Dec 15, 2023
Parent: 9b78a6ee
Showing 3 changed files with 5 additions and 2 deletions (+5 -2):

server/text_generation_server/models/custom_modeling/flash_mistral_modeling.py  +1 -1
server/text_generation_server/models/custom_modeling/flash_mixtral_modeling.py  +1 -1
server/text_generation_server/utils/flash_attn.py  +3 -0
server/text_generation_server/models/custom_modeling/flash_mistral_modeling.py

@@ -149,7 +149,7 @@ class MistralAttention(torch.nn.Module):
     ):
         super().__init__()
         self.max_past = (
-            config.sliding_window if config.sliding_window is not None else 0
+            config.sliding_window if config.sliding_window is not None else -1
         )
         self.num_heads = config.num_attention_heads
         self.hidden_size = config.hidden_size
server/text_generation_server/models/custom_modeling/flash_mixtral_modeling.py

@@ -204,7 +204,7 @@ class MixtralAttention(torch.nn.Module):
     ):
         super().__init__()
         self.max_past = (
-            config.sliding_window if config.sliding_window is not None else 0
+            config.sliding_window if config.sliding_window is not None else -1
        )
         self.num_heads = config.num_attention_heads
         self.hidden_size = config.hidden_size
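For context, a minimal sketch of how the corrected default behaves (the helper name and the SimpleNamespace configs are illustrative, not part of the commit): when config.sliding_window is unset, max_past now falls back to -1, which the flash attention path treats as "no sliding window", whereas the old default of 0 is rejected by the guard added in flash_attn.py below.

from types import SimpleNamespace

def resolve_max_past(config):
    # Mirrors the updated expression in MistralAttention / MixtralAttention:
    # fall back to -1 ("no sliding window") when sliding_window is unset.
    return config.sliding_window if config.sliding_window is not None else -1

windowed = SimpleNamespace(sliding_window=4096)    # hypothetical config with a window
unwindowed = SimpleNamespace(sliding_window=None)  # hypothetical config without one

print(resolve_max_past(windowed))    # 4096
print(resolve_max_past(unwindowed))  # -1 (previously 0)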
server/text_generation_server/utils/flash_attn.py

@@ -72,6 +72,9 @@ def attention(
     softmax_scale,
     window_size_left=-1,
 ):
+    if window_size_left <= 0 and window_size_left != -1:
+        raise ValueError("`window_size_left` must be > 0 or -1")
+
     if HAS_FLASH_ATTN_V2_CUDA:
         return flash_attn_2_cuda.varlen_fwd(
             q,
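To show the added guard in isolation, here is a small standalone sketch (the wrapper function is illustrative, not part of the library): it accepts -1 or any positive window size and rejects 0 and other non-positive values, which is why the max_past default above had to become -1 rather than 0.

def check_window_size_left(window_size_left: int) -> int:
    # Reproduces the validation added to attention(): only a positive window
    # or the sentinel -1 ("no sliding window") is accepted.
    if window_size_left <= 0 and window_size_left != -1:
        raise ValueError("`window_size_left` must be > 0 or -1")
    return window_size_left

check_window_size_left(-1)     # ok: no sliding window
check_window_size_left(4096)   # ok: finite window
try:
    check_window_size_left(0)  # the old max_past default would hit this
except ValueError as err:
    print(err)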