norm / vllm · Commits

Commit 7b6844e5, authored Feb 22, 2023 by Woosuk Kwon
Add input metadata
Parent: 608f74ff
Showing 2 changed files with 28 additions and 2 deletions (+28 -2):
cacheflow/models/__init__.py         +3  -2
cacheflow/models/input_metadata.py   +25 -0
cacheflow/models/__init__.py (view file @ 7b6844e5)
-from cacheflow.worker.models.model_utils import get_model
+from cacheflow.models.input_metadata import InputMetadata
+from cacheflow.models.model_utils import get_model

 __all__ = [
     'get_model',
+    'InputMetadata',
 ]
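With this change, both names are re-exported from the package root. A minimal usage sketch (not part of the commit):

# Usage sketch: the re-exported names can now be imported from cacheflow.models.
from cacheflow.models import InputMetadata, get_model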
cacheflow/models/input_metadata.py (new file, 0 → 100644; view file @ 7b6844e5)
from typing import List

import torch


class InputMetadata:

    def __init__(
        self,
        prompt_lens: List[int],
        slot_mapping: torch.Tensor,
        context_lens: torch.Tensor,
        max_context_len: int,
        block_tables: torch.Tensor,
    ) -> None:
        self.prompt_lens = prompt_lens
        self.prompt_block_table = slot_mapping
        self.context_lens = context_lens
        self.max_context_len = max_context_len
        self.block_tables = block_tables

        self.num_prompts = len(prompt_lens)
        self.num_generation_tokens = context_lens.shape[0]
        self.max_num_blocks_per_seq = block_tables.shape[1]
        assert self.num_generation_tokens == block_tables.shape[0]
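For reference, a minimal construction sketch with dummy values (not part of the commit) showing tensor shapes that satisfy the assertion in __init__; note that the constructor stores slot_mapping under the attribute name prompt_block_table:

# Construction sketch with hypothetical dummy values; shapes follow the
# checks in InputMetadata.__init__ (block_tables.shape[0] == context_lens.shape[0]).
import torch

from cacheflow.models import InputMetadata

prompt_lens = [5, 7]                                        # two prompts
slot_mapping = torch.tensor([0, 1, 2, 3], dtype=torch.int)  # hypothetical slot indices
context_lens = torch.tensor([9, 11, 4], dtype=torch.int)    # one entry per generation token
block_tables = torch.zeros(3, 8, dtype=torch.int)           # (num generation tokens, blocks per seq)

metadata = InputMetadata(
    prompt_lens=prompt_lens,
    slot_mapping=slot_mapping,
    context_lens=context_lens,
    max_context_len=11,
    block_tables=block_tables,
)

# Fields derived in __init__:
#   metadata.num_prompts            == 2  (len(prompt_lens))
#   metadata.num_generation_tokens  == 3  (context_lens.shape[0])
#   metadata.max_num_blocks_per_seq == 8  (block_tables.shape[1])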