xdb4_94051 / vllm / Commits / 4b5bcf89

Commit 4b5bcf89 (Unverified)
Authored Sep 08, 2023 by Robert Irvine; committed by GitHub, Sep 08, 2023

faster startup of vLLM (#982)
* update

Co-authored-by: Robert Irvine <robert@seamlessml.com>

Parent: 852ef5b4
Showing 1 changed file with 3 additions and 2 deletions.

vllm/model_executor/layers/attention.py (+3, -2)
```diff
@@ -259,8 +259,9 @@ class PagedAttentionWithRoPE(PagedAttention):
         self.is_neox_style = is_neox_style

         # Create the cos and sin cache.
-        inv_freq = 1.0 / (base ** (torch.arange(0, rotary_dim, 2) / rotary_dim))
-        t = torch.arange(max_position).float()
+        inv_freq = 1.0 / (base ** (torch.arange(0, rotary_dim, 2, device="cuda") / rotary_dim))
+        t = torch.arange(max_position, device="cuda").float()
         freqs = torch.einsum("i,j -> ij", t, inv_freq.float())
         cos = freqs.cos()
         sin = freqs.sin()
```
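The change above speeds up startup by allocating the RoPE cache tensors directly on the GPU instead of building them on the CPU and copying them over. A minimal, self-contained sketch of the same cache construction (the function name and the `device` parameter are illustrative, not from the commit; the commit hard-codes `device="cuda"`, while this sketch defaults to `"cpu"` so it runs anywhere):

```python
import torch


def build_rope_cache(rotary_dim: int, max_position: int,
                     base: float = 10000.0,
                     device: str = "cpu") -> tuple[torch.Tensor, torch.Tensor]:
    """Build the RoPE cos/sin cache directly on `device`.

    Creating the tensors on the target device avoids a host-to-device
    copy at startup, which is the point of the commit above.
    """
    # Inverse frequencies: 1 / base^(2i / d) for i = 0 .. d/2 - 1.
    inv_freq = 1.0 / (base ** (torch.arange(0, rotary_dim, 2, device=device) / rotary_dim))
    # Position indices 0 .. max_position - 1, as floats.
    t = torch.arange(max_position, device=device).float()
    # Outer product: freqs[p, i] = p * inv_freq[i].
    freqs = torch.einsum("i,j -> ij", t, inv_freq.float())
    return freqs.cos(), freqs.sin()


# Usage: a (max_position, rotary_dim // 2) cache per table.
cos, sin = build_rope_cache(rotary_dim=8, max_position=16)
```

At position 0 every angle is zero, so the first row of the cos table is all ones and the first row of the sin table is all zeros.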