Unverified commit 13ec8d42 authored by Xinyuan Tong, committed by GitHub

[Docs] Update reasoning parser doc & fix outdated link (#9492)


Signed-off-by: Xinyuan Tong <xinyuantong.cs@gmail.com>
parent 05bd7897
@@ -13,6 +13,7 @@
"| Model | Reasoning tags | Parser | Notes |\n",
"|---------|-----------------------------|------------------|-------|\n",
"| [DeepSeek‑R1 series](https://huggingface.co/collections/deepseek-ai/deepseek-r1-678e1e131c0169c0bc89728d) | `<think>` … `</think>` | `deepseek-r1` | Supports all variants (R1, R1-0528, R1-Distill) |\n",
"| [DeepSeek‑V3.1](https://huggingface.co/deepseek-ai/DeepSeek-V3.1) | `<think>` … `</think>` | `deepseek-v3` | Supports `thinking` parameter |\n",
"| [Standard Qwen3 models](https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653967f) | `<think>` … `</think>` | `qwen3` | Supports `enable_thinking` parameter |\n",
"| [Qwen3-Thinking models](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) | `<think>` … `</think>` | `qwen3` or `qwen3-thinking` | Always generates thinking content |\n",
"| [Kimi models](https://huggingface.co/moonshotai/models) | `◁think▷` … `◁/think▷` | `kimi` | Uses special thinking delimiters |\n",
@@ -24,6 +25,9 @@
"- DeepSeek-R1-0528: Generates both `<think>` start and `</think>` end tags\n",
"- Both are handled by the same `deepseek-r1` parser\n",
"\n",
"**DeepSeek-V3 Family:**\n",
"- DeepSeek-V3.1: Hybrid model supporting both thinking and non-thinking modes; use the `deepseek-v3` parser and the `thinking` parameter (note: not `enable_thinking`)\n",
"\n",
"**Qwen3 Family:**\n",
"- Standard Qwen3 (e.g., Qwen3-2507): Use `qwen3` parser, supports `enable_thinking` in chat templates\n",
"- Qwen3-Thinking (e.g., Qwen3-235B-A22B-Thinking-2507): Use `qwen3` or `qwen3-thinking` parser, always thinks\n",
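The `enable_thinking` / `thinking` flags above are toggled per request through chat-template kwargs. As a rough sketch (the helper name is ours, and the `chat_template_kwargs` / `separate_reasoning` extra-body fields are assumed to follow sglang's OpenAI-compatible API), the per-family request extras could be assembled like this:

```python
# Hypothetical helper (not part of sglang): build the extra request fields
# that toggle reasoning for a given model family, using the parameter names
# from the table above (Qwen3 -> enable_thinking, DeepSeek-V3.1 -> thinking).
def reasoning_request_extras(model_family: str, think: bool) -> dict:
    flag = {"qwen3": "enable_thinking", "deepseek-v3": "thinking"}[model_family]
    return {"chat_template_kwargs": {flag: think}, "separate_reasoning": True}
```

The returned dict would be passed as `extra_body` in an OpenAI-client `chat.completions.create` call against a running sglang server.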
@@ -354,92 +358,6 @@
"\n",
"For future reasoning models, you can implement the reasoning parser as a subclass of `BaseReasoningFormatDetector` in `python/sglang/srt/reasoning_parser.py` and specify the reasoning parser for new reasoning model schemas accordingly."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"```python\n",
"# Note: BaseReasoningFormatDetector and KimiDetector are defined in\n",
"# python/sglang/srt/reasoning_parser.py.\n",
"from typing import Dict, Tuple, Type\n",
"\n",
"class DeepSeekR1Detector(BaseReasoningFormatDetector):\n",
" \"\"\"\n",
" Detector for DeepSeek-R1 family models.\n",
" \n",
" Supported models:\n",
" - DeepSeek-R1: Always generates thinking content without <think> start tag\n",
" - DeepSeek-R1-0528: Generates thinking content with <think> start tag\n",
" \n",
" This detector handles both patterns automatically.\n",
" \"\"\"\n",
"\n",
" def __init__(self, stream_reasoning: bool = True):\n",
" super().__init__(\"<think>\", \"</think>\", force_reasoning=True, stream_reasoning=stream_reasoning)\n",
"\n",
"\n",
"class Qwen3Detector(BaseReasoningFormatDetector):\n",
" \"\"\"\n",
" Detector for standard Qwen3 models that support enable_thinking parameter.\n",
" \n",
" These models can switch between thinking and non-thinking modes:\n",
" - enable_thinking=True: Generates <think>...</think> tags\n",
" - enable_thinking=False: No thinking content generated\n",
" \"\"\"\n",
"\n",
" def __init__(self, stream_reasoning: bool = True):\n",
" super().__init__(\"<think>\", \"</think>\", force_reasoning=False, stream_reasoning=stream_reasoning)\n",
"\n",
"\n",
"class Qwen3ThinkingDetector(BaseReasoningFormatDetector):\n",
" \"\"\"\n",
" Detector for Qwen3-Thinking models (e.g., Qwen3-235B-A22B-Thinking-2507).\n",
" \n",
" These models always generate thinking content without <think> start tag.\n",
" They do not support the enable_thinking parameter.\n",
" \"\"\"\n",
"\n",
" def __init__(self, stream_reasoning: bool = True):\n",
" super().__init__(\"<think>\", \"</think>\", force_reasoning=True, stream_reasoning=stream_reasoning)\n",
"\n",
"\n",
"class ReasoningParser:\n",
" \"\"\"\n",
" Parser that handles both streaming and non-streaming scenarios.\n",
" \n",
" Usage:\n",
" # For standard Qwen3 models with enable_thinking support\n",
" parser = ReasoningParser(\"qwen3\")\n",
" \n",
" # For Qwen3-Thinking models that always think\n",
" parser = ReasoningParser(\"qwen3-thinking\")\n",
" \"\"\"\n",
"\n",
" DetectorMap: Dict[str, Type[BaseReasoningFormatDetector]] = {\n",
" \"deepseek-r1\": DeepSeekR1Detector,\n",
" \"qwen3\": Qwen3Detector,\n",
" \"qwen3-thinking\": Qwen3ThinkingDetector,\n",
" \"kimi\": KimiDetector,\n",
" }\n",
"\n",
" def __init__(self, model_type: str = None, stream_reasoning: bool = True):\n",
" if not model_type:\n",
" raise ValueError(\"Model type must be specified\")\n",
"\n",
" detector_class = self.DetectorMap.get(model_type.lower())\n",
" if not detector_class:\n",
" raise ValueError(f\"Unsupported model type: {model_type}\")\n",
"\n",
" self.detector = detector_class(stream_reasoning=stream_reasoning)\n",
"\n",
" def parse_non_stream(self, full_text: str) -> Tuple[str, str]:\n",
" \"\"\"Returns (reasoning_text, normal_text)\"\"\"\n",
" ret = self.detector.detect_and_parse(full_text)\n",
" return ret.reasoning_text, ret.normal_text\n",
"\n",
" def parse_stream_chunk(self, chunk_text: str) -> Tuple[str, str]:\n",
" \"\"\"Returns (reasoning_text, normal_text) for the current chunk\"\"\"\n",
" ret = self.detector.parse_streaming_increment(chunk_text)\n",
" return ret.reasoning_text, ret.normal_text\n",
"```"
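The detectors above all rely on the same underlying tag-splitting idea. As a self-contained illustration only (this is not sglang's implementation, and the function name is ours), the core split can be sketched as:

```python
# Minimal sketch of <think> ... </think> separation. force_reasoning mirrors
# the detector flag for models that emit thinking without a start tag.
def split_reasoning(text: str, start: str = "<think>", end: str = "</think>",
                    force_reasoning: bool = False) -> tuple[str, str]:
    """Return (reasoning_text, normal_text)."""
    if start in text:
        text = text.split(start, 1)[1]
    elif not force_reasoning:
        # No start tag and the model is not forced to think: all normal text.
        return "", text
    if end in text:
        reasoning, normal = text.split(end, 1)
        return reasoning.strip(), normal.strip()
    # Still inside the reasoning block (no end tag seen yet).
    return text.strip(), ""
```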
]
},
],
"metadata": {
...
@@ -167,9 +167,9 @@ python3 -m sglang.launch_server --model-path deepseek-ai/DeepSeek-V3-0324 --spec
- Set `--cuda-graph-bs`. It's a list of batch sizes for CUDA graph capture. The default captured batch sizes for speculative decoding are set [here](https://github.com/sgl-project/sglang/blob/49420741746c8f3e80e0eb17e7d012bfaf25793a/python/sglang/srt/model_executor/cuda_graph_runner.py#L126). You can add more batch sizes to it.
### Reasoning Content for DeepSeek R1 & V3.1
See [Reasoning Parser](https://docs.sglang.ai/advanced_features/separate_reasoning.html) and [Thinking Parameter for DeepSeek V3.1](https://docs.sglang.ai/basic_usage/openai_api_completions.html#Example:-DeepSeek-V3-Models).
### Function calling for DeepSeek Models
...