@@ -40,7 +40,7 @@ Overall, with these optimizations, we have achieved up to a 7x acceleration in o
<img src="https://lmsys.org/images/blog/sglang_v0_3/deepseek_mla.svg" alt="Multi-head Latent Attention for DeepSeek Series Models">
</p>
-**Usage**: MLA optimization is enabled by defalut, to disable, use `--disable-mla`.
+**Usage**: MLA optimization is enabled by default; to disable it, use `--disable-mla`.
**Reference**: Check [Blog](https://lmsys.org/blog/2024-09-04-sglang-v0-3/#deepseek-multi-head-latent-attention-mla-throughput-optimizations) and [Slides](https://github.com/sgl-project/sgl-learning-materials/blob/main/slides/lmsys_1st_meetup_deepseek_mla.pdf) for more details.
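A minimal launch sketch (the model path and `--tp` size are illustrative assumptions, not taken from this diff); MLA is used automatically, and `--disable-mla` opts out:

```bash
# MLA is the default attention path for DeepSeek models.
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V2 --tp 8 --trust-remote-code

# Opt out of MLA (illustrative):
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V2 --tp 8 --trust-remote-code --disable-mla
```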
...
...
@@ -52,7 +52,7 @@ Overall, with these optimizations, we have achieved up to a 7x acceleration in o
<img src="https://lmsys.org/images/blog/sglang_v0_4/dp_attention.svg" alt="Data Parallelism Attention for DeepSeek Series Models">
</p>
-**Usage**: This optimization is aimed at improving throughput and should be used for scenarios with high QPS (Queries Per Second). Data Parallelism Attention optimization can be enabeld by `--enable-dp-attention` for DeepSeek Series Models.
+**Usage**: This optimization targets throughput and is intended for high-QPS (queries per second) scenarios. Data Parallelism Attention can be enabled with `--enable-dp-attention` for DeepSeek series models; see the example below.
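A minimal sketch of a throughput-oriented launch (model path and `--tp` size are illustrative assumptions, not from this diff):

```bash
# Enable data-parallel attention for high-QPS DeepSeek serving.
python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V2 --tp 8 --trust-remote-code --enable-dp-attention
```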