English | [简体中文](README_zh-CN.md)
👋 join us on Twitter, Discord and WeChat
______________________________________________________________________

## News 🎉

- \[2023/07\] TurboMind supports Llama-2 70B with GQA.
- \[2023/07\] TurboMind supports Llama-2 7B/13B.
- \[2023/07\] TurboMind supports tensor-parallel inference of InternLM.

______________________________________________________________________

## Introduction

LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the [MMRazor](https://github.com/open-mmlab/mmrazor) and [MMDeploy](https://github.com/open-mmlab/mmdeploy) teams. It has the following core features:

- **Efficient Inference Engine (TurboMind)**: Based on [FasterTransformer](https://github.com/NVIDIA/FasterTransformer), we have implemented an efficient inference engine, TurboMind, which supports the inference of LLaMA and its variant models on NVIDIA GPUs.
- **Interactive Inference Mode**: By caching the k/v of attention during multi-round dialogues, the engine remembers the dialogue history and avoids re-processing historical sessions.
- **Multi-GPU Model Deployment and Quantization**: We provide comprehensive model deployment and quantization support, validated at different scales (see the tensor-parallel sketch at the end of this section).
- **Persistent Batch Inference**: Further optimization of model execution efficiency.

## Performance

**Case I**: output token throughput with a fixed number of input and output tokens (1 and 2048, respectively)

**Case II**: request throughput with real conversation data

Test setting: LLaMA-7B, NVIDIA A100 (80G)

The output token throughput of TurboMind exceeds 2000 tokens/s, which is about 5% - 15% higher than DeepSpeed overall and outperforms huggingface transformers by up to 2.3x. The request throughput of TurboMind is 30% higher than vLLM's.

## Quick Start

### Installation

Install lmdeploy with pip (Python 3.8+) or [from source](./docs/en/build.md):

```shell
pip install lmdeploy
```

### Deploy InternLM

#### Get InternLM model

```shell
# 1. Download InternLM model

# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/internlm/internlm-chat-7b /path/to/internlm-chat-7b

# if you want to clone without large files - just their pointers -
# prepend your git clone with the following env var:
# GIT_LFS_SKIP_SMUDGE=1

# 2. Convert InternLM model to turbomind's format, which will be in "./workspace" by default
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b /path/to/internlm-chat-7b
```

#### Inference by TurboMind

```shell
python -m lmdeploy.turbomind.chat ./workspace
```

> **Note**
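The News section above mentions tensor-parallel inference, and multi-GPU deployment is listed among the core features. A minimal sketch of converting and chatting across 2 GPUs follows; the `--tp` flag is an assumption here and may differ between versions, so verify it against the deploy module's `--help` output:

```shell
# Convert the model for tensor-parallel inference on 2 GPUs.
# NOTE: the --tp option is an assumption; check the exact flag with
# `python3 -m lmdeploy.serve.turbomind.deploy --help`.
python3 -m lmdeploy.serve.turbomind.deploy internlm-chat-7b \
    /path/to/internlm-chat-7b --tp 2

# TurboMind reads the tensor-parallel degree from the generated
# ./workspace configuration, so chatting works as before.
python -m lmdeploy.turbomind.chat ./workspace
```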
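Similarly, the News section notes Llama-2 support. Converting a Llama-2 checkpoint should follow the same pattern as InternLM, with a different model-name argument; a hedged sketch, where the `llama2` model-name string and the checkpoint path are assumptions:

```shell
# Hypothetical: convert a Llama-2 7B chat checkpoint. The "llama2"
# model-name argument is an assumption -- consult the deploy script's
# supported model list for the exact string.
python3 -m lmdeploy.serve.turbomind.deploy llama2 /path/to/llama-2-7b-chat

# Chat through TurboMind, same as with InternLM.
python -m lmdeploy.turbomind.chat ./workspace
```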