OpenDAS / text-generation-inference / Commits

Commit 0759ec49 (Unverified)
Authored Jul 02, 2024 by Nicolas Patry, committed by GitHub on Jul 02, 2024
Parent: 963b6c6f

Hotfixing qwen2 and starcoder2 (which also get clamping). (#2167)
Changes: showing 2 changed files with 2 additions and 2 deletions (+2 −2)

  server/text_generation_server/models/custom_modeling/flash_qwen2_modeling.py (+1 −1)
  server/text_generation_server/models/custom_modeling/flash_starcoder2_modeling.py (+1 −1)
server/text_generation_server/models/custom_modeling/flash_qwen2_modeling.py

@@ -368,7 +368,7 @@ class Qwen2ForCausalLM(torch.nn.Module):
         elif self.max_past is not None:
             # Clamp in decode mode as paged attention requires clamped values whereas the flash attention
             # kernel requires the true values
-            input_lengths = torch.clamp(input_lengths, max=self.max_past_tensor)
+            input_lengths = input_lengths.clamp(max=self.max_past_tensor)

         hidden_states = self.model(
             input_ids,
server/text_generation_server/models/custom_modeling/flash_starcoder2_modeling.py

@@ -534,7 +534,7 @@ class FlashStarcoder2ForCausalLM(torch.nn.Module):
         elif self.max_past is not None:
             # Clamp in decode mode as paged attention requires clamped values whereas the flash attention
             # kernel requires the true values
-            input_lengths = torch.clamp(input_lengths, max=self.max_past_tensor)
+            input_lengths = input_lengths.clamp(max=self.max_past_tensor)

         hidden_states = self.model(
             input_ids,
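Both hunks make the same one-line change: the functional torch.clamp(input_lengths, max=self.max_past_tensor) call becomes the tensor-method form input_lengths.clamp(max=self.max_past_tensor). The sketch below shows what this decode-mode clamping does; it is illustrative only, and the variable values are assumptions rather than anything taken from the repo.

import torch

# Illustrative sketch only; the lengths and window size below are made up.
# Per-sequence lengths in a decode batch, and a sliding-window size such as
# self.max_past_tensor would hold.
input_lengths = torch.tensor([12, 4096, 9000], dtype=torch.int32)
max_past_tensor = torch.tensor(4096, dtype=torch.int32)

# New form used by the commit (tensor method) ...
clamped = input_lengths.clamp(max=max_past_tensor)
# ... and the functional form it replaces; both cap each length at the window size.
clamped_old = torch.clamp(input_lengths, max=max_past_tensor)

print(clamped)                            # tensor([  12, 4096, 4096], dtype=torch.int32)
print(torch.equal(clamped, clamped_old))  # True

As the in-diff comment notes, the paged-attention decode path needs the clamped lengths, while the flash-attention kernel used for prefill needs the true lengths, so the clamp is applied only on this branch.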