vllm · Commit 30e77528 (unverified)

fix typo (#1184)

Authored Sep 28, 2023 by Wang Ran (汪然); committed Sep 27, 2023 by GitHub
Co-authored-by: Zhuohan Li <zhuohan123@gmail.com>
Parent: 21877b0d
Showing 2 changed files with 5 additions and 6 deletions:

  vllm/engine/llm_engine.py  +2 -2
  vllm/engine/ray_utils.py   +3 -4
vllm/engine/llm_engine.py

@@ -54,8 +54,8 @@ class LLMEngine:
         scheduler_config: The configuration related to the request scheduler.
         distributed_init_method: The initialization method for distributed
             execution. See `torch.distributed.init_process_group` for details.
-        stage_devices: The list of devices for each stage. Each stage is a list
-            of (rank, node_resource, device) tuples.
+        placement_group: Ray placement group for distributed execution.
+            Required for distributed execution.
         log_stats: Whether to log statistics.
     """
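For context on the two names the corrected docstring uses, here is a minimal, hypothetical sketch of what each object is in plain Ray and PyTorch terms. The address, worker count, and resource bundles below are invented for illustration and are not taken from this commit; only `ray.util.placement_group` and `torch.distributed.init_process_group` are real APIs.

import ray
import torch.distributed
from ray.util.placement_group import placement_group

ray.init()

# A Ray placement group reserves one bundle of resources per distributed
# worker; the engine later schedules its workers into these bundles.
num_workers = 2  # example value
pg = placement_group([{"CPU": 1, "GPU": 1}] * num_workers, strategy="PACK")
ray.get(pg.ready())  # Blocks until the cluster can satisfy the reservation.

# `distributed_init_method` is the rendezvous address each worker passes to
# torch.distributed.init_process_group; this call blocks until all
# `world_size` ranks have joined.
distributed_init_method = "tcp://127.0.0.1:29500"  # example address
torch.distributed.init_process_group(backend="nccl",
                                     init_method=distributed_init_method,
                                     world_size=num_workers,
                                     rank=0)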
vllm/engine/ray_utils.py

@@ -63,11 +63,10 @@ def initialize_cluster(
             the default Ray cluster address.

     Returns:
-        A tuple of (`distributed_init_method`, `all_stage_devices`). The
+        A tuple of (`distributed_init_method`, `placement_group`). The
         `distributed_init_method` is the address for initializing the
-        distributed backend. `all_stage_devices` includes device IDs for
-        each worker in each pipeline stage. Each device ID is a tuple of
-        (rank, node resource, device id).
+        distributed backend. `placement_group` includes the specification
+        of the resources for each distributed worker.
     """
     if parallel_config.worker_use_ray or engine_use_ray:
         if ray is None:
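A hedged sketch of the call site the corrected Returns: section describes, assuming the `ParallelConfig` and `initialize_cluster` signatures as they existed around this commit; the parallel sizes are arbitrary example values.

from vllm.config import ParallelConfig
from vllm.engine.ray_utils import initialize_cluster

parallel_config = ParallelConfig(pipeline_parallel_size=1,
                                 tensor_parallel_size=2,
                                 worker_use_ray=True)

# Per the fixed docstring, the second element of the returned tuple is a Ray
# placement group describing per-worker resources, not a list of per-stage
# (rank, node resource, device) tuples.
distributed_init_method, placement_group = initialize_cluster(parallel_config)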