norm/vllm

Commit a2a9869c, authored Feb 12, 2023 by Woosuk Kwon

SERVING -> RUNNING

Parent: 46958cf9
Changes: 2 files changed, 2 additions and 2 deletions

- cacheflow/master/block_manager.py (+1, -1)
- cacheflow/sequence.py (+1, -1)
cacheflow/master/block_manager.py @ a2a9869c

@@ -95,7 +95,7 @@ class BlockSpaceManager:
         # Simple heuristic: If there is at least one free block
         # for each sequence, we can append.
         num_free_gpu_blocks = self.gpu_allocator.get_num_free_blocks()
-        num_seqs = seq_group.num_seqs(status=SequenceStatus.SERVING)
+        num_seqs = seq_group.num_seqs(status=SequenceStatus.RUNNING)
         return num_seqs <= num_free_gpu_blocks

     def append(self, seq: Sequence) -> Optional[Tuple[int, int]]:
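The heuristic in the hunk above can be illustrated with a small self-contained sketch. `ToyAllocator` and `can_append` are hypothetical stand-ins for the real `gpu_allocator` and `BlockSpaceManager` logic; only the comparison itself mirrors the diff.

```python
class ToyAllocator:
    """Hypothetical stand-in for the GPU block allocator in the diff."""

    def __init__(self, num_free_blocks: int):
        self.num_free_blocks = num_free_blocks

    def get_num_free_blocks(self) -> int:
        return self.num_free_blocks


def can_append(num_running_seqs: int, allocator: ToyAllocator) -> bool:
    # Simple heuristic from the diff: appending is safe when there is
    # at least one free GPU block per running sequence.
    return num_running_seqs <= allocator.get_num_free_blocks()


print(can_append(2, ToyAllocator(4)))  # True: 2 running seqs, 4 free blocks
print(can_append(5, ToyAllocator(4)))  # False: more seqs than free blocks
```

The check is conservative: it only guarantees one block per sequence for the next append step, not for the rest of the generation.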
cacheflow/sequence.py @ a2a9869c

@@ -7,7 +7,7 @@ from cacheflow.decoding import DecodingParams

 class SequenceStatus(enum.Enum):
     PENDING = enum.auto()
-    SERVING = enum.auto()
+    RUNNING = enum.auto()
     SWAPPED = enum.auto()
     FINISHED = enum.auto()