"Some models support internal reasoning or thinking processes that can be exposed in the API response. SGLang provides unified support for various reasoning models through the `chat_template_kwargs` parameter and compatible reasoning parsers.\n",
"\n",
"#### Supported Models and Configuration\n",
"\n",
"| Model Family | Chat Template Parameter | Reasoning Parser | Notes |\n",
"1. Launch the server with the appropriate reasoning parser\n",
"2. Set the model-specific parameter in `chat_template_kwargs`\n",
"3. Optionally use `separate_reasoning: False` to not get reasoning content separately (default to `True`)\n",
"\n",
"**Note for Qwen3-Thinking models:** These models always generate thinking content and do not support the `enable_thinking` parameter. Use `--reasoning-parser qwen3-thinking` or `--reasoning-parser qwen3` to parse the thinking content.\n"
"Reasoning: Okay, so I need to figure out which number is greater between 9.11 and 9.8...\n",
"Answer: 9.8 is greater than 9.11.\n",
"```\n",
"\n",
"**Note:** Setting `\"enable_thinking\": False` (or omitting it) will result in `reasoning_content` being `None`. Qwen3-Thinking models always generate reasoning content and don't support the `enable_thinking` parameter.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Example: DeepSeek-V3 Models\n",
"\n",
"DeepSeek-V3 models support thinking mode through the `thinking` parameter:\n",
"You can use `chat_template_kwargs` to enable or disable the model's internal thinking or reasoning process output. Set `\"enable_thinking\": True` within `chat_template_kwargs` to include the reasoning steps in the response. This requires launching the server with a compatible reasoning parser.\n",
"\n",
"**Reasoning Parser Options:**\n",
"- `--reasoning-parser deepseek-r1`: For DeepSeek-R1 family models (R1, R1-0528, R1-Distill)\n",
"- `--reasoning-parser qwen3`: For both standard Qwen3 models that support `enable_thinking` parameter and Qwen3-Thinking models\n",
"- `--reasoning-parser qwen3-thinking`: For Qwen3-Thinking models, force reasoning version of qwen3 parser\n",
"- `--reasoning-parser kimi`: For Kimi thinking models\n",
"\n",
"Here's an example demonstrating how to enable thinking and retrieve the reasoning content separately (using `separate_reasoning: True`):\n",
"\n",
"```python\n",
"# For Qwen3 models with enable_thinking support:\n",
" Okay, so I need to figure out which number is greater between 9.11 and 9.8. Hmm, let me think. Both numbers start with 9, right? So the whole number part is the same. That means I need to look at the decimal parts to determine which one is bigger.\n",
"...\n",
"Therefore, after checking multiple methods—aligning decimals, subtracting, converting to fractions, and using a real-world analogy—it's clear that 9.8 is greater than 9.11.\n",
"\n",
"response.choices[0].message.content: \n",
" To determine which number is greater between **9.11** and **9.8**, follow these steps:\n",
"...\n",
"**Answer**: \n",
"9.8 is greater than 9.11.\n",
"```\n",
"\n",
"Setting `\"enable_thinking\": False` (or omitting it) will result in `reasoning_content` being `None`.\n",
"\n",
"**Note for Qwen3-Thinking models:** These models always generate thinking content and do not support the `enable_thinking` parameter. Use `--reasoning-parser qwen3-thinking` or `--reasoning-parser qwen3` to parse the thinking content.\n",
"\n",
"Here is an example of a detailed chat completion request using standard OpenAI parameters:"