SGLang provides robust support for embedding models by integrating efficient serving mechanisms with its flexible programming interface. This combination streamlines embedding workloads such as retrieval and semantic search, improving resource utilization and reducing latency in embedding model deployment.
```{important}
Embedding models are run with the `--is-embedding` flag, and some may require `--trust-remote-code`.
```
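For example, a server for one of the models below can be launched as follows (a minimal sketch; the model path and port are placeholders, so adjust them for your deployment):

```bash
python3 -m sglang.launch_server \
  --model-path Qwen/Qwen3-Embedding-4B \
  --is-embedding \
  --host 0.0.0.0 \
  --port 30000
```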
| Model Family (Embedding) | Example HuggingFace Identifier | Chat Template | Description |
|---|---|---|---|
| **Llama/Mistral based (E5EmbeddingModel)** | `intfloat/e5-mistral-7b-instruct` | N/A | Mistral/Llama-based embedding model fine‑tuned for high‑quality text embeddings (top‑ranked on the MTEB benchmark). |
| **Qwen3-Embedding** | `Qwen/Qwen3-Embedding-4B` | N/A | Latest Qwen3-based text embedding model for semantic representation |
| **GTE (QwenEmbeddingModel)** | `Alibaba-NLP/gte-Qwen2-7B-instruct` | N/A | Alibaba’s general text embedding model (7B), achieving state‑of‑the‑art multilingual performance in English and Chinese. |
| **GME (MultimodalEmbedModel)** | `Alibaba-NLP/gme-Qwen2-VL-2B-Instruct` | `gme-qwen2-vl` | Multimodal embedding model (2B) based on Qwen2‑VL, encoding image + text into a unified vector space for cross‑modal retrieval. |
| **CLIP (CLIPEmbeddingModel)** | `openai/clip-vit-large-patch14-336` | N/A | OpenAI’s CLIP model (ViT‑L/14) for embedding images (and text) into a joint latent space; widely used for image similarity search. |
| **BGE (BgeEmbeddingModel)** | `BAAI/bge-large-en-v1.5` | N/A | BAAI's BGE embedding model optimized for retrieval and reranking tasks. Currently supports only the `triton` and `torch_native` values of `--attention-backend`. |
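
Once the server is running, embeddings can be requested through SGLang's OpenAI-compatible `/v1/embeddings` endpoint. The sketch below assumes a server launched as shown above, listening on `localhost:30000`:

```python
import requests

# Request an embedding from a locally running SGLang server
# (assumes the launch command shown earlier, listening on port 30000).
response = requests.post(
    "http://localhost:30000/v1/embeddings",
    json={
        "model": "Qwen/Qwen3-Embedding-4B",
        "input": "What is the capital of France?",
    },
)
response.raise_for_status()

# The response follows the OpenAI embeddings format:
# one entry per input, each carrying an "embedding" vector.
embedding = response.json()["data"][0]["embedding"]
print(len(embedding))  # dimensionality of the embedding vector
```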