sglang · Commit 21e9e63a

Print progress bar during cuda graph capture (#2502)

Authored Dec 17, 2024 by Lianmin Zheng
Parent: 1fc84cf6
Showing 2 changed files with 21 additions and 1 deletion (+21 −1):

docs/references/torch_compile_cache.md (+13 −0)
python/sglang/srt/model_executor/cuda_graph_runner.py (+8 −1)
docs/references/torch_compile_cache.md (new file, 0 → 100644)
# Enabling cache for torch.compile

SGLang uses the `max-autotune-no-cudagraphs` mode of torch.compile. The auto-tuning can be slow.
If you want to deploy a model on many different machines, you can ship the torch.compile cache to these machines and skip the compilation steps.

This is based on https://pytorch.org/tutorials/recipes/torch_compile_caching_tutorial.html

1. Generate the cache by setting `TORCHINDUCTOR_CACHE_DIR` and running the model once.

   ```
   TORCHINDUCTOR_CACHE_DIR=/root/inductor_root_cache python3 -m sglang.launch_server --model meta-llama/Llama-3.1-8B-Instruct --enable-torch-compile
   ```

2. Copy the cache folder to other machines and launch the server with `TORCHINDUCTOR_CACHE_DIR`.
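A minimal sketch of step 2, assuming the cache was generated under `/root/inductor_root_cache` and the target machine is reachable as `other-host` over SSH (both the host name and the path are placeholders, not part of the original doc):

```
# Copy the Inductor cache to the target machine (host and paths are placeholders).
scp -r /root/inductor_root_cache other-host:/root/inductor_root_cache
# Launch the server there, pointing TORCHINDUCTOR_CACHE_DIR at the copied cache.
ssh other-host 'TORCHINDUCTOR_CACHE_DIR=/root/inductor_root_cache python3 -m sglang.launch_server --model meta-llama/Llama-3.1-8B-Instruct --enable-torch-compile'
```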
python/sglang/srt/model_executor/cuda_graph_runner.py
```diff
@@ -20,6 +20,8 @@ from contextlib import contextmanager
 from typing import TYPE_CHECKING, Callable

 import torch
+import tqdm
+from vllm.distributed import get_tensor_model_parallel_rank
 from vllm.distributed.parallel_state import graph_capture
 from vllm.model_executor.custom_op import CustomOp

@@ -255,7 +257,12 @@ class CudaGraphRunner:
     def capture(self):
         with graph_capture() as graph_capture_context:
             self.stream = graph_capture_context.stream
-            for bs in self.capture_bs:
+            capture_bs = (
+                tqdm.tqdm(self.capture_bs)
+                if get_tensor_model_parallel_rank() == 0
+                else self.capture_bs
+            )
+            for bs in capture_bs:
                 with patch_model(
                     self.model_runner.model,
                     bs in self.compile_bs,
```
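The change wraps the capture loop's iterable in `tqdm.tqdm` only on tensor-parallel rank 0, so a progress bar is printed during CUDA graph capture without every rank emitting its own bar. A minimal standalone sketch of that pattern follows; the `rank()` helper and the batch-size list are placeholders standing in for vllm's `get_tensor_model_parallel_rank` and `self.capture_bs`, not part of the commit:

```python
import tqdm


def rank() -> int:
    # Placeholder for vllm.distributed.get_tensor_model_parallel_rank().
    return 0


def capture_all(capture_bs):
    # Show the progress bar only on rank 0 so a multi-GPU run prints a
    # single bar instead of one interleaved bar per process.
    iterable = tqdm.tqdm(capture_bs) if rank() == 0 else capture_bs
    for bs in iterable:
        # ... capture a CUDA graph for batch size `bs` here ...
        pass


if __name__ == "__main__":
    capture_all([1, 2, 4, 8, 16])
```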