Unverified Commit 77a34c28 authored by UnicornChan, committed by GitHub

Merge pull request #36 from kvcache-ai/develop-0.1.2

Release v0.1.2
parents 44f57270 395cd3e7
name: Build Wheels
on:
workflow_dispatch:
inputs:
release:
description: 'Release? 1 = yes, 0 = no'
default: '0'
required: true
type: string
jobs:
build_wheels:
name: ${{ matrix.os }} Python=${{ matrix.pyver }} CUDA=${{ matrix.cuda }} CPU_INSTRUCT=${{ matrix.instruct }} Torch=${{ matrix.torch }}
runs-on: ${{ matrix.os }}
strategy:
fail-fast: false
matrix:
include:
# Ubuntu
- { os: ubuntu-20.04, pyver: '3.12', cuda: '12.2.2', torch: '2.3.0', cudaarch: '8.9;9.0+PTX', instruct: 'FANCY', torch_cu: '121'}
- { os: windows-2022, pyver: '3.11', cuda: '12.5.1', torch: '2.4.0', cudaarch: '8.9;9.0+PTX', instruct: 'AVX2', torch_cu: '124'}
defaults:
run:
shell: pwsh
steps:
- uses: actions/checkout@v3
- name: Free Disk Space
uses: jlumbroso/free-disk-space@v1.3.1
if: runner.os == 'Linux'
with:
tool-cache: true
android: true
dotnet: true
haskell: true
large-packages: false
swap-storage: true
- uses: actions/setup-python@v4
with:
python-version: ${{ matrix.pyver }}
- name: check_space
run: |
if($IsLinux) {df -h}
if($IsWindows) {Get-PSDrive -PSProvider 'FileSystem'}
- uses: actions/setup-node@v4
with:
node-version: 20
- name: Setup Mamba
if: matrix.cuda != ''
uses: conda-incubator/setup-miniconda@v2.3.0
with:
activate-environment: "ktransformers"
python-version: ${{ matrix.pyver }}
miniforge-variant: Mambaforge
miniforge-version: latest
use-mamba: true
add-pip-as-python-dependency: true
auto-activate-base: false
- name: build web
run: |
cd ktransformers/website/
npm install
npm run build
cd ../../
- name: build for cuda
if: matrix.cuda != ''
run: |
git submodule init
git submodule update
if($IsWindows){
$originalPath = Get-Location
Import-Module 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
Enter-VsDevShell -VsInstallPath 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise' -DevCmdArguments '-arch=x64 -host_arch=x64'
$env:DISTUTILS_USE_SDK=1
Set-Location $originalPath
}
$cudaVersion = '${{ matrix.cuda }}'
$env:MAMBA_NO_LOW_SPEED_LIMIT = 1
mamba install -y -c nvidia/label/cuda-$cudaVersion cuda-toolkit cuda-runtime
$env:CUDA_PATH = $env:CONDA_PREFIX
$env:CUDA_HOME = $env:CONDA_PREFIX
if ($IsLinux) {
$env:LD_LIBRARY_PATH = $env:CONDA_PREFIX + '/lib:' + $env:LD_LIBRARY_PATH
$env:LD_LIBRARY_PATH = $env:CONDA_PREFIX + '/lib/python${{ matrix.pyver }}/site-packages/nvidia/nvjitlink/lib:' + $env:LD_LIBRARY_PATH
if (!(Test-Path $env:CUDA_HOME/lib64)) {
New-Item -ItemType SymbolicLink -Path $env:CUDA_HOME/lib64 -Target $env:CUDA_HOME/lib
}
}
if ($IsWindows) {
$env:CUDA_PATH = "$env:CUDA_PATH/Library"
$env:CUDA_HOME = $env:CUDA_PATH
$env:PATH = "$env:CUDA_PATH/bin;" + $env:PATH
cp $env:CUDA_PATH/lib/*.lib $env:CUDA_PATH/lib/x64/
$env:INCLUDE =$env:CUDA_PATH + "/include/targets/x64;" + $env:INCLUDE
}
python -m pip install torch==${{ matrix.torch }} torchvision torchaudio --index-url https://download.pytorch.org/whl/cu${{ matrix.torch_cu }}
python -m pip install cpufeature build wheel ninja packaging setuptools
$env:KTRANSFORMERS_FORCE_BUILD = "TRUE"
$env:CPU_INSTRUCT = '${{ matrix.instruct }}'
$env:TORCH_CUDA_ARCH_LIST = '${{ matrix.cudaarch }}'
python -m build --no-isolation --verbose
      - name: create release dir
run: |
if ($IsWindows) {
$env:date = $(Get-Date -Format "yyyy-MM-dd")
New-Item -ItemType Directory -Force -Path "$Env:USERPROFILE\.ssh"
$Env:SSH_PATH = "$Env:USERPROFILE\.ssh\id_rsa"
Set-Content -Path $Env:SSH_PATH -Value "${{ secrets.SSH_PRIVATE_KEY }}"
(Get-Content -Path $Env:SSH_PATH).Replace("`r`n","`n") | Set-Content -Path $Env:SSH_PATH
chmod 600 $Env:SSH_PATH
}
if ($IsLinux) {
$env:date = $(date +%Y-%m-%d)
mkdir -p ~/.ssh/
echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa
}
ssh -p ${{ secrets.SSH_PORT }} -o StrictHostKeyChecking=no root@${{ secrets.SSH_SERVER }} "mkdir -p /mnt/data/release-$env:date"
scp -P ${{ secrets.SSH_PORT }} -o StrictHostKeyChecking=no dist/*.whl root@${{ secrets.SSH_SERVER }}:/mnt/data/release-$env:date/
\ No newline at end of file
...@@ -14,4 +14,7 @@ node_modules ...@@ -14,4 +14,7 @@ node_modules
.DS_Store .DS_Store
compile_commands.json compile_commands.json
*.egg-info* *.egg-info*
*dist/ *dist/
\ No newline at end of file ktransformers/server/local_store/
ktransformers/server_test1.db
*.patch
\ No newline at end of file
...@@ -268,7 +268,10 @@ In this example, the AutoModel is first initialized on the meta device to avoid ...@@ -268,7 +268,10 @@ In this example, the AutoModel is first initialized on the meta device to avoid
After injection, the original `generate` interface is available, but we also provide a compatible `prefill_and_generate` method, which enables further optimizations like CUDAGraph to improve generation speed. After injection, the original `generate` interface is available, but we also provide a compatible `prefill_and_generate` method, which enables further optimizations like CUDAGraph to improve generation speed.
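As a hedged illustration of the two interfaces mentioned above (argument names are indicative; see `local_chat.py` in the repository for the exact import and signature):
```python
# Sketch only: assumes `model` has already been optimized through the injection framework
# and `input_ids` was produced by the Hugging Face tokenizer.
outputs = model.generate(input_ids, max_new_tokens=256)  # original Transformers interface

# prefill_and_generate is the compatible helper shipped with ktransformers
# (import it the way local_chat.py does); it enables optimizations such as CUDAGraph decoding.
outputs = prefill_and_generate(model, tokenizer, input_ids.cuda(), max_new_tokens=256)
```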
<h3>YAML Template</h3> <h3>How to customize your model</h3>
A detailed tutorial on injection and multi-GPU usage, using DeepSeek-V2 as an example, is given [here](doc/en/injection_tutorial.md).
Below is an example of a YAML template for replacing all original Linear modules with Marlin, an advanced 4-bit quantization kernel. Below is an example of a YAML template for replacing all original Linear modules with Marlin, an advanced 4-bit quantization kernel.
```yaml ```yaml
...@@ -287,7 +290,7 @@ Each rule in the YAML file has two parts: `match` and `replace`. The `match` par ...@@ -287,7 +290,7 @@ Each rule in the YAML file has two parts: `match` and `replace`. The `match` par
You can find example rule templates for optimizing DeepSeek-V2 and Qwen2-57B-A14, two SOTA MoE models, in the [ktransformers/optimize/optimize_rules](ktransformers/optimize/optimize_rules) directory. These templates are used to power the `local_chat.py` demo. You can find example rule templates for optimizing DeepSeek-V2 and Qwen2-57B-A14, two SOTA MoE models, in the [ktransformers/optimize/optimize_rules](ktransformers/optimize/optimize_rules) directory. These templates are used to power the `local_chat.py` demo.
A detailed description of the injection using DeepSeek-V2 as an example is given [here](doc/en/deepseek-v2-injection.md). If you are interested in our design principles and the implementation of the injection framework, please refer to the [design document](doc/en/deepseek-v2-injection.md).
<h2 id="ack">Acknowledgment and Contributors</h2> <h2 id="ack">Acknowledgment and Contributors</h2>
......
...@@ -90,7 +90,7 @@ The YAML rule is listed below. ...@@ -90,7 +90,7 @@ The YAML rule is listed below.
- match: - match:
name: "^model\\.layers\\..*\\.self_attn$" # regular expression name: "^model\\.layers\\..*\\.self_attn$" # regular expression
replace: replace:
class: ktransformers.operators.attention.DeepseekV2AttentionInjected # optimized MLA implementation class: ktransformers.operators.attention.KDeepseekV2Attention # optimized MLA implementation
``` ```
As we can see, each rule in the YAML file has two parts: `match` and `replace`. As we can see, each rule in the YAML file has two parts: `match` and `replace`.
...@@ -98,9 +98,9 @@ The match part specifies which module should be replaced, and the replace part s ...@@ -98,9 +98,9 @@ The match part specifies which module should be replaced, and the replace part s
<h3 id="experts">Routed Experts </h3> <h3 id="experts">Routed Experts </h3>
For routed experts, the module we inject is a wrapper of CPUInfer, KTransformersMLPExpert. There are several implementations within a wrapper, and we need to specify keywords to tell the wrapper which implementation we want to use and how we intend to use it. For routed experts, the module we inject is a wrapper of CPUInfer, KTransformersExperts. There are several implementations within a wrapper, and we need to specify keywords to tell the wrapper which implementation we want to use and how we intend to use it.
In KTransformers, some models exhibit different behaviors during prefilling and generation for better performance. KTransformersMLPExpert is one of them. All these special modules have a `device` keyword describing which device the module should be initialized on. Other keywords specify the behaviors during prefilling and generation and may differ when using different injection modules. Here, we specify which implementation on which device we want to use during prefilling and generation, and which device the output should be on. In KTransformers, some models exhibit different behaviors during prefilling and generation for better performance. KTransformersExperts is one of them. All these special modules have a `device` keyword describing which device the module should be initialized on. Other keywords specify the behaviors during prefilling and generation and may differ when using different injection modules. Here, we specify which implementation on which device we want to use during prefilling and generation, and which device the output should be on.
Note that we only use these parameters when layer-wise prefilling is enabled; otherwise, prefilling is conducted with the same configuration as generation. Note that we only use these parameters when layer-wise prefilling is enabled; otherwise, prefilling is conducted with the same configuration as generation.
In the original implementation of Transformers, MoE is implemented using `nn.ModuleList`. We don't want KTransformers to iterate through all the sub-modules in the list, so we set `recursive: False` in this rule to prevent recursive injection into submodules of the current module. Here is the YAML rule: In the original implementation of Transformers, MoE is implemented using `nn.ModuleList`. We don't want KTransformers to iterate through all the sub-modules in the list, so we set `recursive: False` in this rule to prevent recursive injection into submodules of the current module. Here is the YAML rule:
...@@ -109,13 +109,13 @@ In the original implementation of Transformers, MoE is implemented using `nn.Mod ...@@ -109,13 +109,13 @@ In the original implementation of Transformers, MoE is implemented using `nn.Mod
- match: - match:
name: "^model\\.layers\\..*\\.mlp\\.experts$" name: "^model\\.layers\\..*\\.mlp\\.experts$"
replace: replace:
class: ktransformers.operators.experts.KTransformersMLPExpert # custom MoE Kernel with expert parallelism class: ktransformers.operators.experts.KTransformersExperts # custom MoE Kernel with expert parallelism
device: "cpu" # device to load this module on initialization device: "cpu" # device to load this module on initialization
kwargs: kwargs:
prefill_device: "cuda" prefill_device: "cuda"
prefill_mlp_type: "MLPExpertsTorch" prefill_op: "KExpertsTorch"
generate_device: "cpu" generate_device: "cpu"
generate_mlp_type: "MLPCPUExperts" generate_op: "KExpertsCPU"
out_device: "cuda" out_device: "cuda"
recursive: False # don't recursively inject submodules of this module recursive: False # don't recursively inject submodules of this module
``` ```
...@@ -126,7 +126,7 @@ If we inject the expert list as a custom module, we can't use the interface in ` ...@@ -126,7 +126,7 @@ If we inject the expert list as a custom module, we can't use the interface in `
- match: - match:
class: ktransformers.models.modeling_deepseek.DeepseekV2MoE class: ktransformers.models.modeling_deepseek.DeepseekV2MoE
replace: replace:
class: ktransformers.operators.experts.DeepseekV2MoEInjected # MLP module with custom forward function class: ktransformers.operators.experts.KDeepseekV2MoE # MLP module with custom forward function
``` ```
<h3 id="linear">Other Linear Modules</h3> <h3 id="linear">Other Linear Modules</h3>
...@@ -140,12 +140,12 @@ We also need to transfer some keywords similar to the injection of experts. Here ...@@ -140,12 +140,12 @@ We also need to transfer some keywords similar to the injection of experts. Here
name: "^model\\.layers\\.(?!.*self_attn).*$" # regular expression name: "^model\\.layers\\.(?!.*self_attn).*$" # regular expression
class: torch.nn.Linear # only match modules matching name and class simultaneously class: torch.nn.Linear # only match modules matching name and class simultaneously
replace: replace:
class: ktransformers.operators.linear.KTransformerLinear # optimized Kernel on quantized data types class: ktransformers.operators.linear.KTransformersLinear # optimized Kernel on quantized data types
kwargs: kwargs:
generate_device: "cuda" generate_device: "cuda"
prefill_device: "cuda" prefill_device: "cuda"
generate_op: "QuantizedLinearMarlin" generate_op: "KLinearMarlin"
prefill_op: "QuantizedLinearTorch" prefill_op: "KLinearTorch"
``` ```
<h3 id="Pre-compute Buffers">Pre-compute Buffers </h3> <h3 id="Pre-compute Buffers">Pre-compute Buffers </h3>
......
# Tutorial: Inject Operator Step by Step
> Author: Azure-Tang
## TL;DR
This tutorial will guide you through the process of injecting custom operators into a model using the KTransformers framework. We will use the DeepSeekV2-Chat model as an example to demonstrate how to inject custom operators into the model step by step. The tutorial will cover the following topics:
* [How to write injection rules](#how-to-write-injection-rules)
* [Understanding the structure of the model](#understanding-model-structure)
* [Multi-GPU](#multi-gpu)
* [How to write a new operator and inject it into the model](#how-to-write-a-new-operator-and-inject-into-the-model)
## How to Write Injection Rules
The basic form of the injection rules for the Inject framework is as follows:
```yaml
- match:
name: "^model\\.layers\\..*\\.*$" # Target module name
class: torch.nn.Linear # Target module
replace:
class: "default"
kwargs:
generate_device: "cuda:0"
# your_op_param_1: 1234
# your_op_param_2: 5678
recursive: True
```
* match: This field marks the matching rule, which can take two forms: name and class. The two can be used together or separately; when both are given, a module is matched only if it satisfies both criteria.
* replace:
  * class: Python class that can be imported to replace the target module. If no replacement is desired, set it to `default`.
  * kwargs: List of parameters needed for module initialization.
    * generate_device: The device for this module; it can be set to "cpu", "cuda", "cuda:1", etc.
* recursive: Whether to recursively inject this module's submodules; the default is True.

For the recursive field: some modules contain multiple submodules; a self-attention module, for example, typically contains four linear submodules (the q/k/v/o projections). If we replace the self-attention module but do not want the internal linear modules to be covered by other rules, set recursive to False in that rule.
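To make the matching semantics concrete, here is a minimal, illustrative sketch (an assumed helper, not the framework's actual code) of how a rule list could be evaluated: the name is tested as a regular expression, the class by comparing class names, and the first matching rule wins.
```python
import re
import torch.nn as nn

# Hypothetical helper for illustration only; the rule layout mirrors the YAML above.
def find_rule(module_name: str, module: nn.Module, rules: list):
    for rule in rules:
        match = rule["match"]
        if "name" in match and not re.search(match["name"], module_name):
            continue  # name pattern did not match
        if "class" in match and module.__class__.__name__ != match["class"].split(".")[-1]:
            continue  # class restriction did not match
        return rule   # first matching rule wins; later rules are ignored
    return None

# Example: a plain Linear under model.layers.* is caught by this catch-all rule.
rules = [{"match": {"name": r"^model\.layers\..*$", "class": "torch.nn.Linear"},
          "replace": {"class": "default", "kwargs": {"generate_device": "cuda:0"}}}]
print(find_rule("model.layers.0.mlp.gate_proj", nn.Linear(8, 8), rules) is not None)  # True
```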
## Understanding Model Structure
Using [deepseek-ai/DeepSeek-V2-Lite-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat) as an example, we can follow the above rules step by step to inject our custom module and run it. KTransformers offers a high degree of flexibility, allowing you to replace/experiment with basic operators. However, it also requires users to clearly understand the structure of the model they are running.
Fortunately, knowing the structure of a model is very simple. Open the file list on the [deepseek-ai/DeepSeek-V2-Lite](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat/tree/main) homepage, and you can see the following files:
<p align="center">
<picture>
<img alt="Inject-Struction" src="../assets/model_structure_guild.png" width=60%>
</picture>
</p>
From the `.safetensors` files, we can see the names of each layer's weights, which correspond to the `match.name` attribute in the injection rules.
From the `modeling_deepseek.py` file, we can see the specific implementation of each module class, with the class name corresponding to the `match.class` attribute in the injection rules.
The structure of the DeepSeekV2 model, as seen from the `.safetensors` and `modeling_deepseek.py` files, is as follows (a short sketch of listing these weight names programmatically is given after the figure):
<p align="center">
<picture>
<img alt="Inject-Struction" src="../assets/deepseekv2_structure.png" width=60%>
</picture>
</p>
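If you prefer to enumerate the weight names programmatically rather than browsing the Hub page, the keys of a downloaded `.safetensors` shard are exactly the strings that the `match.name` regular expressions are tested against. A minimal sketch, assuming the `safetensors` package is installed and the shard path points to a file you have downloaded:
```python
from safetensors import safe_open

# Hypothetical local shard of DeepSeek-V2-Lite-Chat; substitute a file you actually have.
shard_path = "model-00001-of-000004.safetensors"

with safe_open(shard_path, framework="pt", device="cpu") as f:
    for key in f.keys():
        # e.g. "model.layers.0.self_attn.q_proj.weight", "model.layers.1.mlp.experts.7.up_proj.weight"
        print(key)
```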
Supported operators and their corresponding classes are as follows:
| match | replace | backends | descriptions |
| --------- | ---------------------- | ----------------------- | -------------------- |
| Linear | KTransformersLinear | KLinearMarlin | Marlin as backend |
| | | KLinearTorch | pytorch as backend |
| | | KLinearCPUInfer | llamafile as backend |
| experts | KTransformersExperts | KExpertsTorch | pytorch as backend |
| | | KExpertsMarlin | Marlin as backend |
| | | KExpertsCPU | llamafile as backend |
| Attention | KDeepseekV2Attention | KDeepseekV2Attention | MLA implementation |
| MoE | KMistralSparseMoEBlock | KQwen2MoeSparseMoeBlock | MoE for Qwen2 |
| | KDeepseekV2MoE | KDeepseekV2MoE | MoE for DeepseekV2 |
| Model | KQwen2MoeModel | KQwen2MoeModel | Model for Qwen2 |
| | KDeepseekV2Model | KDeepseekV2Model | Model for DeepseekV2 |
| RoPE | RotaryEmbedding | RotaryEmbedding | RoPE module |
| | YarnRotaryEmbedding | YarnRotaryEmbedding | RoPE module |
Now we can start the step-by-step injection of custom modules. Our targets are:
* Replace the linear module with custom Marlin linear module.
* Replace the self-attention module with a custom Absorption-based MLA module.
* Replace the experts module with a custom Experts module.
* Replace the MoE module with a custom MoE module.
* Replace the RoPE module with a custom RoPE module.
* Set the running device for each module.
The full implementation of the injection rules can be found [here](https://github.com/kvcache-ai/ktransformers/blob/main/ktransformers/optimize/optimize_rules/DeepSeek-V2-Chat.yaml).
## Matrix Absorption-based MLA Injection
For the injection of the Attention module, we only need to use a regular expression to match the module names used in transformers and replace them with our own MLA module implementation. The YAML injection rule is as follows:
```yaml
- match:
name: "^model\\.layers\\..*\\.self_attn$" # Regular expression
replace:
class: ktransformers.operators.attention.KDeepseekV2Attention # Optimized MLA implementation
```
As you can see, each rule in the YAML file has two parts: match and replace. The match part specifies the module to be replaced, and the replace part specifies the module to be injected into the model along with the initialization keywords.
## Injection of Routed Experts
For Routed Experts (corresponding to the exps in the diagram above), the module we inject is KTransformersExperts, a wrapper around CPUInfer. KTransformersExperts contains several implementations, and we need to specify keywords to tell the wrapper which implementation we want to use and how we plan to use it (a conceptual sketch of such a dispatching wrapper is given after the rule below).
In the Transformers source code, MoE is implemented using `nn.ModuleList`. We do not want KTransformers to traverse all submodules in the list and inject them one by one, so in this rule we set `recursive: False` to prevent recursive injection into the submodules of this module. The YAML rule is as follows:
```yaml
- match:
name: "^model\\.layers\\..*\\.mlp\\.experts$"
replace:
class: ktransformers.operators.experts.KTransformersExperts # Custom MoE kernel with expert parallelism
kwargs:
generate_device: "cpu"
      generate_op: "KExpertsCPU"
out_device: "cuda"
recursive: False # Don't recursively inject submodules of this module
```
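To illustrate what these keywords mean, here is a conceptual sketch, not the library's actual implementation, of how a wrapper could dispatch between a prefill backend and a generation backend and move its output to `out_device`; the backend names reuse those from the table above, but the class below is an assumption for illustration.
```python
import torch
import torch.nn as nn

class ExpertsWrapperSketch(nn.Module):
    """Conceptual stand-in for a wrapper like KTransformersExperts (illustration only)."""

    def __init__(self, backends: dict, prefill_op: str, generate_op: str,
                 prefill_device: str, generate_device: str, out_device: str):
        super().__init__()
        # `backends` maps the YAML keywords ("KExpertsCPU", "KExpertsTorch", ...) to callables.
        self.prefill_experts = backends[prefill_op]
        self.generate_experts = backends[generate_op]
        self.prefill_device, self.generate_device = prefill_device, generate_device
        self.out_device = out_device

    def forward(self, hidden_states: torch.Tensor, is_prefill: bool) -> torch.Tensor:
        if is_prefill:
            out = self.prefill_experts(hidden_states.to(self.prefill_device))
        else:
            out = self.generate_experts(hidden_states.to(self.generate_device))
        return out.to(self.out_device)  # place the result where downstream modules expect it

# Toy usage with dummy callables standing in for the real expert backends.
backends = {"KExpertsTorch": lambda x: x * 2, "KExpertsCPU": lambda x: x + 1}
wrapper = ExpertsWrapperSketch(backends, "KExpertsTorch", "KExpertsCPU", "cpu", "cpu", "cpu")
print(wrapper(torch.ones(2, 4), is_prefill=False))
```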
If we inject Routed Experts as a custom module, we cannot use the interfaces in the original `nn.ModuleList`. Therefore, we need to modify the forward function in the FFN module. The simplest method is to implement a new module with a custom forward function and inject it (a minimal sketch of such a module follows the rule below).
```yaml
- match:
class: ktransformers.models.modeling_deepseek.DeepseekV2MoE
replace:
class: ktransformers.operators.experts.KDeepseekV2MoE # MLP module with custom forward function
```
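A minimal sketch of what such a replacement FFN module might look like, assuming the experts have already been injected as a single wrapper that takes the whole token batch; the class name and the gate interface here are illustrative assumptions, not the actual KDeepseekV2MoE code.
```python
import torch
import torch.nn as nn

class MoEWithInjectedExperts(nn.Module):
    """Illustrative FFN replacement whose forward avoids iterating an nn.ModuleList."""

    def __init__(self, gate: nn.Module, injected_experts: nn.Module, shared_experts: nn.Module):
        super().__init__()
        self.gate = gate                      # assumed to return (expert_ids, expert_weights)
        self.experts = injected_experts       # e.g. the KTransformersExperts wrapper
        self.shared_experts = shared_experts  # dense path kept from the original model

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        expert_ids, expert_weights = self.gate(hidden_states)
        # One call over all routed tokens instead of a Python loop over expert submodules.
        routed_out = self.experts(hidden_states, expert_ids, expert_weights)
        return routed_out + self.shared_experts(hidden_states)
```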
## Injection of Linear Layers
For the remaining linear layer modules, we aim to use quantized operators to save storage space while improving performance. Since there is no published research on combining MLA with quantization, we do not want to inject quantized linear operators into the MLA module. Therefore, we modify the regular expression and add a class check in the match part of the rule: only modules that match both the name and the class are injected. We also need to pass some keywords, similar to the injection of Routed Experts. The YAML rule is as follows:
```yaml
- match:
name: "^model\\.layers\\.(?!.*self_attn).*$" # Regular expression
class: torch.nn.Linear # Only match modules matching name and class simultaneously
replace:
class: ktransformers.operators.linear.KTransformersLinear # Optimized kernel on quantized data types
kwargs:
generate_device: "cuda"
      generate_op: "KLinearMarlin"
```
## Injection of Modules with Pre-calculated Buffers
To avoid occupying memory when initializing the original model to be injected, we use torch's meta device. The RoPE module pre-calculates some buffers during initialization, but no calculation is performed when the meta device is used, so we need to compensate by computing those buffers when the model is loaded. To do this, we inject a custom module into the rotary embedding module that performs the pre-calculation during loading (a minimal sketch of the idea follows the rule below). The YAML rule is as follows:
```yaml
- match:
class: ktransformers.models.modeling_deepseek.DeepseekV2YarnRotaryEmbedding
replace:
class: ktransformers.operators.RoPE.YarnRotaryEmbedding
```
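A minimal sketch of the idea, using a simplified rotary embedding whose cos/sin caches are built in a `load()` hook instead of in `__init__`; this is an illustrative assumption, and the real YarnRotaryEmbedding has additional scaling parameters.
```python
import torch
import torch.nn as nn

class RotaryEmbeddingSketch(nn.Module):
    """Rotary embedding that defers buffer creation, so meta-device initialization stays free."""

    def __init__(self, dim: int, max_position: int = 4096, base: float = 10000.0):
        super().__init__()
        self.dim, self.max_position, self.base = dim, max_position, base
        self.cos_cached = None  # nothing is materialized while the model lives on the meta device
        self.sin_cached = None

    def load(self, device: str = "cuda"):
        # Called when real weights are loaded: compensate for the precomputation skipped at init.
        inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, device=device).float() / self.dim))
        t = torch.arange(self.max_position, device=device).float()
        freqs = torch.outer(t, inv_freq)
        emb = torch.cat((freqs, freqs), dim=-1)
        self.cos_cached, self.sin_cached = emb.cos(), emb.sin()
```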
## Specifying Running Devices for Modules
Finally, we set a fallback `generate_device` attribute for all remaining modules:
```yaml
- match:
name: "^model\\.layers\\..*\\.|^lm_head"
replace:
class: "default"
kwargs:
generate_device: "cuda"
- match:
name: "^model.embed_tokens"
replace:
class: "default"
kwargs:
generate_device: "cpu"
```
Through these two rules, we place all previously unmatched layers (and their submodules) and lm_head on cuda, and the embedding on cpu. Note that a module's properties are determined by the first rule it matches: if a later rule sets a new `replace.kwargs.generate_device` for a module that has already been injected, the earlier setting takes precedence. If your machine has multiple GPUs, you can also distribute the model across them.
## Multi-GPU
If you have multiple GPUs, you can place each module on a different GPU.
DeepSeek-V2-Chat has 60 layers; with 2 GPUs, we can allocate 30 layers to each. Complete multi-GPU rule examples can be found [here](ktransformers/optimize/optimize_rules).
<p align="center">
<picture>
<img alt="Inject-Struction" src="../assets/multi_gpu.png" width=60%>
</picture>
</p>
First of all, for multi-GPU inference we have to inject a new operator, `KDeepseekV2Model`, and divide the layers among the GPUs. In our case, we set the `transfer_map` in the `KDeepseekV2Model` operator as follows (a conceptual sketch of how such a map can be applied follows the rule below):
```yaml
- match:
name: "^model$"
replace:
class: "ktransformers.operators.models.KDeepseekV2Model"
kwargs:
transfer_map:
30: "cuda:1"
```
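Conceptually, the `transfer_map` tells the injected model wrapper where hidden states must move as the layer loop crosses a GPU boundary. A hedged sketch of that idea (not the actual KDeepseekV2Model code) follows; with the rule above, activations would be moved to cuda:1 before layer 30 runs.
```python
import torch

def run_layers_with_transfer_map(hidden_states: torch.Tensor, layers, transfer_map: dict):
    """Illustrative decoder-layer loop honoring a transfer map such as {30: "cuda:1"}."""
    for idx, layer in enumerate(layers):
        if idx in transfer_map:
            hidden_states = hidden_states.to(transfer_map[idx])  # hop to the next GPU
        hidden_states = layer(hidden_states)  # the layer itself was injected with matching devices
    return hidden_states
```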
We also have to set the device for each module in the model.
For example, for the routed experts, the YAML for a single GPU is:
```yaml
- match:
name: "^model\\.layers\\..*\\.mlp\\.experts$"
replace:
class: ktransformers.operators.experts.KTransformersExperts # Custom MoE kernel with expert parallelism
kwargs:
generate_device: "cuda:0"
generate_op: "MLPCUDAExperts"
out_device: "cuda:0"
recursive: False # Don't recursively inject submodules of this module
```
But for two GPUs, we need different settings for the layers assigned to each GPU:
```yaml
# allocate layers 0-29's out_device to cuda:0
- match:
name: "^model\\.layers\\.(0|[1-9]|[12][0-9])\\.mlp\\.experts$"
replace:
    class: ktransformers.operators.experts.KTransformersExperts     # custom MoE Kernel with expert parallelism
kwargs:
generate_device: "cpu"
generate_op: "KExpertsCPU"
out_device: "cuda:0"
recursive: False # don't recursively inject submodules of this module
# allocate layers 30-59's out_device to cuda:1
- match:
name: "^model\\.layers\\.([345][0-9])\\.mlp\\.experts$"
replace:
    class: ktransformers.operators.experts.KTransformersExperts     # custom MoE Kernel with expert parallelism
kwargs:
generate_device: "cpu"
generate_op: "KExpertsCPU"
out_device: "cuda:1"
recursive: False # don't recursively inject submodules of this module
```
For other modules, we can set the device in the same way.
## How to Write a New Operator and Inject into the Model
In this section, we will explain how to write an injectable operator, using the implementation of a new linear module as an example.
First, all injectable operators need to inherit from the BaseInjectedModule class, which provides the attributes required by our injection framework. Its initialization function must conform to the following basic format:
```python
class LinearTorchInject(BaseInjectedModule):
def __init__(
self,
key: str,
gguf_loader: GGUFLoader,
config: PretrainedConfig,
orig_module: nn.Module = None,
generate_device: str = "cuda",
**kwargs,
):
super().__init__(key, gguf_loader, config, orig_module, generate_device, **kwargs)
```
If users have other parameters that need to be passed to this class, they can also be added to the init function and supplied through the `kwargs` field in the YAML file (a sketch of how the framework could forward these kwargs is given after the injection rule below). For example, if our operator wants to receive a parameter `my_param`, the init function can be written as:
```python
class LinearTorchInject(BaseInjectedModule):
def __init__(
self,
key: str,
gguf_loader: GGUFLoader,
config: PretrainedConfig,
orig_module: nn.Module = None,
generate_device: str = "cuda",
my_param: bool = True,
**kwargs,
):
super().__init__(key, gguf_loader, config, orig_module, generate_device, **kwargs)
self.my_param = my_param
```
Then our injection rule can be written as:
```yaml
- match:
name: "^model\\.layers\\..*$" # Regular expression matches the module name.
class: torch.nn.Linear # Type restrictions can be added.
replace:
class: ktransformers.operators.linear.LinearTorchInject # Inject module path
kwargs: # Extra parameters
generate_device: "cuda"
my_param: True
```
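For intuition, the extra keys under `kwargs` simply become keyword arguments of the operator's `__init__`. Below is a minimal sketch of how an injection framework could construct the replacement module from such a rule; this is an assumption for illustration, not the actual loader code.
```python
import importlib

def build_replacement(rule: dict, key: str, gguf_loader, config, orig_module):
    """Instantiate the class named in replace.class and forward replace.kwargs to it."""
    module_path, _, class_name = rule["replace"]["class"].rpartition(".")
    cls = getattr(importlib.import_module(module_path), class_name)
    kwargs = rule["replace"].get("kwargs", {})  # e.g. {"generate_device": "cuda", "my_param": True}
    return cls(key, gguf_loader, config, orig_module, **kwargs)
```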
For the linear module, it is also necessary to read weights from a gguf file. We provide the `KLinearBase` class to help users read weights from gguf files. Users only need to inherit and implement the load, unload, and forward functions. Therefore, a fully injectable linear class would look like this:
```python
class LinearTorchInject(BaseInjectedModule, KLinearBase):
def __init__(
self,
key: str,
gguf_loader: GGUFLoader,
config: PretrainedConfig,
orig_module: nn.Module = None,
generate_device: str = "cuda",
**kwargs,
):
super().__init__(key, gguf_loader, config, orig_module, generate_device, **kwargs)
KLinearBase.__init__(self)
self.has_bias = False
self.dtype = torch.get_default_dtype()
self.w = None
def load(self, w: dict | nn.Parameter | tuple | None = None, device: str|None = None):
if device is None: device = self.device
if w is None: w = self.load_weight(device=device)
if isinstance(w, nn.Parameter):
self.w = w.to(dtype=self.dtype).view(self.out_features, self.in_features).T
self.has_bias = False
elif isinstance(w, tuple):
self.w = w[0].to(dtype=self.dtype).view(self.out_features, self.in_features).T
self.bias = w[1].to(dtype=self.dtype)
self.has_bias = True
else:
raise ValueError("Invalid weight type")
self.w = self.w.to(device)
if self.has_bias:
self.bias = self.bias.to(device)
def unload(self):
if self.w is not None:
self.w = None
if self.has_bias:
self.bias = None
def forward(self, x: torch.Tensor) -> torch.Tensor:
dtype = x.dtype
out_device = x.device
x = x.to(device=self.device, dtype=self.dtype)
x = x @ self.w
if self.has_bias:
x = x + self.bias
x = x.to(dtype=dtype, device=out_device)
return x
```
Note that the `self.load_weight` function is provided by the KLinearBase class to help users load weights from a gguf file into the module. The implementation details of KLinearBase can be found on [GITHUB](https://github.com/kvcache-ai/ktransformers/blob/44f57270c9514d79fab224186d90ccf61059331a/ktransformers/operators/linear.py#L31).
__version__ = "0.1.1" __version__ = "0.1.2"
\ No newline at end of file \ No newline at end of file
...@@ -22,14 +22,13 @@ option(LLAMA_AVX2 "llama: enable AVX2" ...@@ -22,14 +22,13 @@ option(LLAMA_AVX2 "llama: enable AVX2"
option(LLAMA_AVX512 "llama: enable AVX512" OFF) option(LLAMA_AVX512 "llama: enable AVX512" OFF)
option(LLAMA_AVX512_VBMI "llama: enable AVX512-VBMI" OFF) option(LLAMA_AVX512_VBMI "llama: enable AVX512-VBMI" OFF)
option(LLAMA_AVX512_VNNI "llama: enable AVX512-VNNI" OFF) option(LLAMA_AVX512_VNNI "llama: enable AVX512-VNNI" OFF)
option(LLAMA_AVX512_BF16 "llama: enable AVX512-BF16" OFF)
option(LLAMA_FMA "llama: enable FMA" OFF) option(LLAMA_FMA "llama: enable FMA" OFF)
# in MSVC F16C is implied with AVX2/AVX512 # in MSVC F16C is implied with AVX2/AVX512
if (NOT MSVC) if (NOT MSVC)
option(LLAMA_F16C "llama: enable F16C" OFF) option(LLAMA_F16C "llama: enable F16C" OFF)
endif() endif()
option(LLAMA_AVX512_FANCY_SIMD "llama: enable AVX512-VL, AVX512-BW, AVX512-DQ, AVX512-VNNI" OFF) option(LLAMA_AVX512_FANCY_SIMD "llama: enable AVX512-VL, AVX512-BW, AVX512-DQ, AVX512-VNNI" OFF)
option(LLAMA_AVX512_BF16 "llama: enable AVX512-BF16" OFF)
# Architecture specific # Architecture specific
# TODO: probably these flags need to be tweaked on some architectures # TODO: probably these flags need to be tweaked on some architectures
......
...@@ -6,7 +6,7 @@ Author : chenht2022 ...@@ -6,7 +6,7 @@ Author : chenht2022
Date : 2024-07-25 10:31:59 Date : 2024-07-25 10:31:59
Version : 1.0.0 Version : 1.0.0
LastEditors : chenht2022 LastEditors : chenht2022
LastEditTime : 2024-07-25 10:32:51 LastEditTime : 2024-08-06 10:35:35
Copyright (c) 2024 by KVCache.AI, All Rights Reserved. Copyright (c) 2024 by KVCache.AI, All Rights Reserved.
''' '''
import os, sys import os, sys
...@@ -15,15 +15,18 @@ sys.path.append(os.path.dirname(__file__) + '/../build') ...@@ -15,15 +15,18 @@ sys.path.append(os.path.dirname(__file__) + '/../build')
import cpuinfer_ext import cpuinfer_ext
import torch import torch
input_size = 16384
output_size = 5120
stride = 16
group_max_len = 1024
layer_num = 10
qlen = 1
CPUInfer = cpuinfer_ext.CPUInfer(64)
warm_up_iter = 1000
test_iter = 10000
def bench_linear(quant_mode: str): def bench_linear(quant_mode: str):
with torch.inference_mode(mode=True): with torch.inference_mode(mode=True):
input_size = 16384
output_size = 5120
stride = 16
layer_num = 10
CPUInfer = cpuinfer_ext.CPUInfer(64)
warm_up_iter = 1000
test_iter = 10000
hidden_type = 30 # ggml_type::GGML_TYPE_BF16 hidden_type = 30 # ggml_type::GGML_TYPE_BF16
if quant_mode == "fp32": if quant_mode == "fp32":
...@@ -66,30 +69,37 @@ def bench_linear(quant_mode: str): ...@@ -66,30 +69,37 @@ def bench_linear(quant_mode: str):
projs = [] projs = []
for _ in range(layer_num): for _ in range(layer_num):
proj = torch.randn((output_size, input_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() proj = torch.randn((output_size, input_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
config = cpuinfer_ext.linear.LinearConfig(input_size, output_size, stride, proj.data_ptr(), proj_type, hidden_type) config = cpuinfer_ext.linear.LinearConfig(input_size, output_size, stride, group_max_len, proj.data_ptr(), proj_type, hidden_type)
linear = cpuinfer_ext.linear.Linear(config) linear = cpuinfer_ext.linear.Linear(config)
projs.append(proj) projs.append(proj)
linears.append(linear) linears.append(linear)
input = torch.randn((layer_num, qlen, input_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
output = torch.empty((layer_num, qlen, output_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
# warm up # warm up
for i in range(warm_up_iter): for i in range(warm_up_iter):
linear = linears[i % layer_num] CPUInfer.submit(
input = torch.randn((1, input_size), dtype=torch.bfloat16).contiguous() linears[i % layer_num].forward(
output = torch.empty((1, output_size), dtype=torch.bfloat16).contiguous() qlen,
CPUInfer.submit(linear.forward, input.data_ptr(), output.data_ptr()) input[i % layer_num].data_ptr(),
output[i % layer_num].data_ptr()
)
)
CPUInfer.sync() CPUInfer.sync()
# test # test
total_time = 0 start = time.perf_counter()
for i in range(test_iter): for i in range(test_iter):
linear = linears[i % layer_num] CPUInfer.submit(
input = torch.randn((1, input_size), dtype=torch.bfloat16).contiguous() linears[i % layer_num].forward(
output = torch.empty((1, output_size), dtype=torch.bfloat16).contiguous() qlen,
start = time.perf_counter() input[i % layer_num].data_ptr(),
CPUInfer.submit(linear.forward, input.data_ptr(), output.data_ptr()) output[i % layer_num].data_ptr()
)
)
CPUInfer.sync() CPUInfer.sync()
end = time.perf_counter() end = time.perf_counter()
total_time += end - start total_time = end - start
print('Quant mode: ', quant_mode) print('Quant mode: ', quant_mode)
print('Time(s): ', total_time) print('Time(s): ', total_time)
print('Iteration: ', test_iter) print('Iteration: ', test_iter)
......
...@@ -14,14 +14,17 @@ import time ...@@ -14,14 +14,17 @@ import time
import torch import torch
import torch.nn.quantized as nnq import torch.nn.quantized as nnq
scale, zero_point = 0.1, 0 # Adjust scale and zero_point based on your dataset
input_size = 16384
output_size = 5120
layer_num = 10
qlen = 1
warm_up_iter = 1000
test_iter = 10000
def bench_linear(quant_mode: str): def bench_linear(quant_mode: str):
with torch.inference_mode(mode=True): with torch.inference_mode(mode=True):
input_size = 16384
output_size = 5120
layer_num = 10
warm_up_iter = 1000
test_iter = 10000
if quant_mode == "fp32": if quant_mode == "fp32":
proj_type = torch.float32 proj_type = torch.float32
bytes_per_elem = 4.000000 bytes_per_elem = 4.000000
...@@ -41,37 +44,32 @@ def bench_linear(quant_mode: str): ...@@ -41,37 +44,32 @@ def bench_linear(quant_mode: str):
for _ in range(layer_num): for _ in range(layer_num):
proj = torch.randn((output_size, input_size), dtype = torch.float32, device = "cuda").to("cpu").contiguous() proj = torch.randn((output_size, input_size), dtype = torch.float32, device = "cuda").to("cpu").contiguous()
if quant_mode == "qint8": if quant_mode == "qint8":
scale, zero_point = 0.1, 0 # Adjust scale and zero_point based on your dataset
proj_q = torch.quantize_per_tensor(proj, scale, zero_point, torch.qint8) proj_q = torch.quantize_per_tensor(proj, scale, zero_point, torch.qint8)
quantized_layer = nnq.Linear(input_size, output_size) quantized_layer = nnq.Linear(input_size, output_size)
quantized_layer.set_weight_bias(proj_q, None) quantized_layer.set_weight_bias(proj_q, None)
projs.append(quantized_layer) projs.append(quantized_layer)
else: else:
projs.append(proj.to(proj_type)) projs.append(proj.to(proj_type))
input = torch.randn((layer_num, qlen, input_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
# warm up # warm up
for i in range(warm_up_iter): for i in range(warm_up_iter):
input = torch.randn((1, input_size), dtype=torch.float32).contiguous() if isinstance(projs[i % layer_num], nnq.Linear):
if quant_mode == "qint8": input_q = torch.quantize_per_tensor(input[i % layer_num].to(torch.float32), scale, zero_point, torch.quint8)
input_q = torch.quantize_per_tensor(input, scale, zero_point, torch.quint8) t_output = projs[i % layer_num](input_q)
quantized_layer = projs[i % layer_num]
t_output = quantized_layer(input_q)
else: else:
t_output = torch.mm(input.to(proj_type), projs[i % layer_num].t()) t_output = torch.mm(input[i % layer_num].to(proj_type), projs[i % layer_num].t())
# test # test
total_time = 0 start = time.perf_counter()
for i in range(test_iter): for i in range(test_iter):
input = torch.randn((1, input_size), dtype=torch.float32).contiguous() if isinstance(projs[i % layer_num], nnq.Linear):
start = time.perf_counter() input_q = torch.quantize_per_tensor(input[i % layer_num].to(torch.float32), scale, zero_point, torch.quint8)
if quant_mode == "qint8": t_output = projs[i % layer_num](input_q)
input_q = torch.quantize_per_tensor(input, scale, zero_point, torch.quint8)
quantized_layer = projs[i % layer_num]
t_output = quantized_layer(input_q)
else: else:
t_output = torch.mm(input.to(proj_type), projs[i % layer_num].t()) t_output = torch.mm(input[i % layer_num].to(proj_type), projs[i % layer_num].t())
end = time.perf_counter() end = time.perf_counter()
total_time += end - start total_time = end - start
print('Quant mode: ', quant_mode) print('Quant mode: ', quant_mode)
print('Time(s): ', total_time) print('Time(s): ', total_time)
print('Iteration: ', test_iter) print('Iteration: ', test_iter)
......
...@@ -6,7 +6,7 @@ Author : chenht2022 ...@@ -6,7 +6,7 @@ Author : chenht2022
Date : 2024-07-16 10:43:18 Date : 2024-07-16 10:43:18
Version : 1.0.0 Version : 1.0.0
LastEditors : chenht2022 LastEditors : chenht2022
LastEditTime : 2024-07-25 10:32:55 LastEditTime : 2024-08-06 10:36:04
Copyright (c) 2024 by KVCache.AI, All Rights Reserved. Copyright (c) 2024 by KVCache.AI, All Rights Reserved.
''' '''
import os, sys import os, sys
...@@ -15,15 +15,18 @@ sys.path.append(os.path.dirname(__file__) + '/../build') ...@@ -15,15 +15,18 @@ sys.path.append(os.path.dirname(__file__) + '/../build')
import cpuinfer_ext import cpuinfer_ext
import torch import torch
hidden_size = 5120
intermediate_size = 3072
stride = 16
group_max_len = 1024
layer_num = 10
qlen = 1
CPUInfer = cpuinfer_ext.CPUInfer(64)
warm_up_iter = 1000
test_iter = 10000
def bench_mlp(quant_mode: str): def bench_mlp(quant_mode: str):
with torch.inference_mode(mode=True): with torch.inference_mode(mode=True):
hidden_size = 5120
intermediate_size = 3072
stride = 16
layer_num = 10
CPUInfer = cpuinfer_ext.CPUInfer(64)
warm_up_iter = 1000
test_iter = 10000
hidden_type = 30 # ggml_type::GGML_TYPE_BF16 hidden_type = 30 # ggml_type::GGML_TYPE_BF16
if quant_mode == "fp32": if quant_mode == "fp32":
...@@ -93,32 +96,39 @@ def bench_mlp(quant_mode: str): ...@@ -93,32 +96,39 @@ def bench_mlp(quant_mode: str):
gate_proj = torch.randn((intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() gate_proj = torch.randn((intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
up_proj = torch.randn((intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() up_proj = torch.randn((intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
down_proj = torch.randn((hidden_size, intermediate_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() down_proj = torch.randn((hidden_size, intermediate_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
config = cpuinfer_ext.mlp.MLPConfig(hidden_size, intermediate_size, stride, gate_proj.data_ptr(), up_proj.data_ptr(), down_proj.data_ptr(), gate_type, up_type, down_type, hidden_type) config = cpuinfer_ext.mlp.MLPConfig(hidden_size, intermediate_size, stride, group_max_len, gate_proj.data_ptr(), up_proj.data_ptr(), down_proj.data_ptr(), gate_type, up_type, down_type, hidden_type)
mlp = cpuinfer_ext.mlp.MLP(config) mlp = cpuinfer_ext.mlp.MLP(config)
gate_projs.append(gate_proj) gate_projs.append(gate_proj)
up_projs.append(up_proj) up_projs.append(up_proj)
down_projs.append(down_proj) down_projs.append(down_proj)
mlps.append(mlp) mlps.append(mlp)
input = torch.randn((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
output = torch.empty((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
# warm up # warm up
for i in range(warm_up_iter): for i in range(warm_up_iter):
mlp = mlps[i % layer_num] CPUInfer.submit(
input = torch.randn((1, hidden_size), dtype=torch.bfloat16).contiguous() mlps[i % layer_num].forward(
output = torch.empty((1, hidden_size), dtype=torch.bfloat16).contiguous() qlen,
CPUInfer.submit(mlp.forward, input.data_ptr(), output.data_ptr()) input[i % layer_num].data_ptr(),
output[i % layer_num].data_ptr()
)
)
CPUInfer.sync() CPUInfer.sync()
# test # test
total_time = 0 start = time.perf_counter()
for i in range(test_iter): for i in range(test_iter):
mlp = mlps[i % layer_num] CPUInfer.submit(
input = torch.randn((1, hidden_size), dtype=torch.bfloat16).contiguous() mlps[i % layer_num].forward(
output = torch.empty((1, hidden_size), dtype=torch.bfloat16).contiguous() qlen,
start = time.perf_counter() input[i % layer_num].data_ptr(),
CPUInfer.submit(mlp.forward, input.data_ptr(), output.data_ptr()) output[i % layer_num].data_ptr()
)
)
CPUInfer.sync() CPUInfer.sync()
end = time.perf_counter() end = time.perf_counter()
total_time += end - start total_time = end - start
print('Quant mode: ', quant_mode) print('Quant mode: ', quant_mode)
print('Time(s): ', total_time) print('Time(s): ', total_time)
print('Iteration: ', test_iter) print('Iteration: ', test_iter)
......
...@@ -14,17 +14,38 @@ import time ...@@ -14,17 +14,38 @@ import time
import torch import torch
import torch.nn.quantized as nnq import torch.nn.quantized as nnq
scale, zero_point = 0.1, 0 # Adjust scale and zero_point based on your dataset
hidden_size = 5120
intermediate_size = 3072
layer_num = 10
qlen = 1
warm_up_iter = 1000
test_iter = 10000
def act_fn(x): def act_fn(x):
return x / (1.0 + torch.exp(-x)) return x / (1.0 + torch.exp(-x))
def mlp_torch(input, gate_proj, up_proj, down_proj):
if isinstance(gate_proj, nnq.Linear):
input_q = torch.quantize_per_tensor(input.to(torch.float32), scale, zero_point, torch.quint8)
gate_buf = gate_proj(input_q)
up_buf = up_proj(input_q)
gate_buf = gate_buf.dequantize()
up_buf = up_buf.dequantize()
intermediate = act_fn(gate_buf) * up_buf
intermediate_q = torch.quantize_per_tensor(intermediate, scale, zero_point, torch.quint8)
expert_output = down_proj(intermediate_q)
ret = expert_output.dequantize()
else:
gate_buf = torch.mm(input.to(gate_proj.dtype), gate_proj.t())
up_buf = torch.mm(input.to(up_proj.dtype), up_proj.t())
intermediate = act_fn(gate_buf) * up_buf
ret = torch.mm(intermediate.to(down_proj.dtype), down_proj.t())
return ret
def bench_mlp(quant_mode: str): def bench_mlp(quant_mode: str):
with torch.inference_mode(mode=True): with torch.inference_mode(mode=True):
hidden_size = 5120
intermediate_size = 3072
layer_num = 10
warm_up_iter = 1000
test_iter = 10000
if quant_mode == "fp32": if quant_mode == "fp32":
proj_type = torch.float32 proj_type = torch.float32
bytes_per_elem = 4.000000 bytes_per_elem = 4.000000
...@@ -48,7 +69,6 @@ def bench_mlp(quant_mode: str): ...@@ -48,7 +69,6 @@ def bench_mlp(quant_mode: str):
up_proj = torch.randn((intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() up_proj = torch.randn((intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
down_proj = torch.randn((hidden_size, intermediate_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() down_proj = torch.randn((hidden_size, intermediate_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
if quant_mode == "qint8": if quant_mode == "qint8":
scale, zero_point = 0.1, 0 # Adjust scale and zero_point based on your dataset
gate_proj_q = torch.quantize_per_tensor(gate_proj, scale, zero_point, torch.qint8) gate_proj_q = torch.quantize_per_tensor(gate_proj, scale, zero_point, torch.qint8)
quantized_gate = nnq.Linear(hidden_size, intermediate_size) quantized_gate = nnq.Linear(hidden_size, intermediate_size)
quantized_gate.set_weight_bias(gate_proj_q, None) quantized_gate.set_weight_bias(gate_proj_q, None)
...@@ -65,58 +85,18 @@ def bench_mlp(quant_mode: str): ...@@ -65,58 +85,18 @@ def bench_mlp(quant_mode: str):
gate_projs.append(gate_proj.to(proj_type)) gate_projs.append(gate_proj.to(proj_type))
up_projs.append(up_proj.to(proj_type)) up_projs.append(up_proj.to(proj_type))
down_projs.append(down_proj.to(proj_type)) down_projs.append(down_proj.to(proj_type))
input = torch.randn((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
# warm up # warm up
for i in range(warm_up_iter): for i in range(warm_up_iter):
input = torch.randn((1, hidden_size), dtype=torch.float32).contiguous() mlp_torch(input[i % layer_num], gate_projs[i % layer_num], up_projs[i % layer_num], down_projs[i % layer_num])
if quant_mode == "qint8":
input_q = torch.quantize_per_tensor(input, scale, zero_point, torch.quint8)
quantized_gate = gate_projs[i % layer_num]
gate_buf = quantized_gate(input_q)
quantized_up = up_projs[i % layer_num]
up_buf = quantized_gate(input_q)
gate_buf = gate_buf.dequantize()
up_buf = up_buf.dequantize()
intermediate = act_fn(gate_buf) * up_buf
intermediate_q = torch.quantize_per_tensor(intermediate, scale, zero_point, torch.quint8)
quantized_down = down_projs[i % layer_num]
t_output = quantized_down(intermediate_q)
else:
gate_proj = gate_projs[i%layer_num]
up_proj = up_projs[i%layer_num]
down_proj = down_projs[i%layer_num]
gate_buf = torch.mm(input.to(proj_type), gate_proj.t())
up_buf = torch.mm(input.to(proj_type), up_proj.t())
intermediate = act_fn(gate_buf) * up_buf
t_output = torch.mm(intermediate.to(proj_type), down_proj.t())
# test # test
total_time = 0 start = time.perf_counter()
for i in range(test_iter): for i in range(test_iter):
input = torch.randn((1, hidden_size), dtype=torch.float32).contiguous() mlp_torch(input[i % layer_num], gate_projs[i % layer_num], up_projs[i % layer_num], down_projs[i % layer_num])
start = time.perf_counter() end = time.perf_counter()
if quant_mode == "qint8": total_time = end - start
input_q = torch.quantize_per_tensor(input, scale, zero_point, torch.quint8)
quantized_gate = gate_projs[i % layer_num]
gate_buf = quantized_gate(input_q)
quantized_up = up_projs[i % layer_num]
up_buf = quantized_gate(input_q)
gate_buf = gate_buf.dequantize()
up_buf = up_buf.dequantize()
intermediate = act_fn(gate_buf) * up_buf
intermediate_q = torch.quantize_per_tensor(intermediate, scale, zero_point, torch.quint8)
quantized_down = down_projs[i % layer_num]
t_output = quantized_down(intermediate_q)
else:
gate_proj = gate_projs[i%layer_num]
up_proj = up_projs[i%layer_num]
down_proj = down_projs[i%layer_num]
gate_buf = torch.mm(input.to(proj_type), gate_proj.t())
up_buf = torch.mm(input.to(proj_type), up_proj.t())
intermediate = act_fn(gate_buf) * up_buf
t_output = torch.mm(intermediate.to(proj_type), down_proj.t())
end = time.perf_counter()
total_time += end - start
print('Quant mode: ', quant_mode) print('Quant mode: ', quant_mode)
print('Time(s): ', total_time) print('Time(s): ', total_time)
print('Iteration: ', test_iter) print('Iteration: ', test_iter)
......
...@@ -6,7 +6,7 @@ Author : chenht2022 ...@@ -6,7 +6,7 @@ Author : chenht2022
Date : 2024-07-25 10:32:05 Date : 2024-07-25 10:32:05
Version : 1.0.0 Version : 1.0.0
LastEditors : chenht2022 LastEditors : chenht2022
LastEditTime : 2024-07-25 10:33:00 LastEditTime : 2024-08-06 10:41:28
Copyright (c) 2024 by KVCache.AI, All Rights Reserved. Copyright (c) 2024 by KVCache.AI, All Rights Reserved.
''' '''
import os, sys import os, sys
...@@ -15,21 +15,21 @@ sys.path.append(os.path.dirname(__file__) + '/../build') ...@@ -15,21 +15,21 @@ sys.path.append(os.path.dirname(__file__) + '/../build')
import cpuinfer_ext import cpuinfer_ext
import torch import torch
expert_num = 160
hidden_size = 5120
intermediate_size = 1536
stride = 16
group_min_len = 10
group_max_len = 1024
n_routed_experts = 6
layer_num = 10
qlen = 1
CPUInfer = cpuinfer_ext.CPUInfer(64)
warm_up_iter = 1000
test_iter = 10000
def bench_moe(quant_mode: str): def bench_moe(quant_mode: str):
with torch.inference_mode(mode=True): with torch.inference_mode(mode=True):
expert_num = 10
hidden_size = 5120
intermediate_size = 1536
stride = 16
group_min_len = 10
group_max_len = 1024
n_routed_experts = 6
layer_num = 10
qlen = 1
CPUInfer = cpuinfer_ext.CPUInfer(64)
warm_up_iter = 1000
test_iter = 10000
hidden_type = 30 # ggml_type::GGML_TYPE_BF16 hidden_type = 30 # ggml_type::GGML_TYPE_BF16
if quant_mode == "fp32": if quant_mode == "fp32":
gate_type = 0 # ggml_type::GGML_TYPE_F32 gate_type = 0 # ggml_type::GGML_TYPE_F32
...@@ -104,32 +104,38 @@ def bench_moe(quant_mode: str): ...@@ -104,32 +104,38 @@ def bench_moe(quant_mode: str):
up_projs.append(up_proj) up_projs.append(up_proj)
down_projs.append(down_proj) down_projs.append(down_proj)
moes.append(moe) moes.append(moe)
expert_ids = torch.randint(0, expert_num, (layer_num, qlen, n_routed_experts), dtype=torch.int64, device = "cuda").to("cpu").contiguous() expert_ids = torch.stack([torch.stack([torch.randperm(expert_num, dtype=torch.int64, device = "cuda")[:n_routed_experts] for _ in range(qlen)]) for _ in range(layer_num)]).to("cpu").contiguous()
weights = torch.rand((layer_num, qlen, n_routed_experts), dtype=torch.float32, device = "cuda").to("cpu").contiguous() weights = torch.rand((layer_num, qlen, n_routed_experts), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
input = torch.randn((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous() input = torch.randn((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
output = torch.empty((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous() output = torch.empty((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
# warm up # warm up
for i in range(warm_up_iter): for i in range(warm_up_iter):
CPUInfer.submit(moes[i % layer_num].forward, CPUInfer.submit(
qlen, moes[i % layer_num].forward(
n_routed_experts, qlen,
expert_ids[i % layer_num].data_ptr(), n_routed_experts,
weights[i % layer_num].data_ptr(), expert_ids[i % layer_num].data_ptr(),
input[i % layer_num].data_ptr(), weights[i % layer_num].data_ptr(),
output[i % layer_num].data_ptr()) input[i % layer_num].data_ptr(),
output[i % layer_num].data_ptr()
)
)
CPUInfer.sync() CPUInfer.sync()
# test # test
start = time.perf_counter() start = time.perf_counter()
for i in range(test_iter): for i in range(test_iter):
CPUInfer.submit(moes[i % layer_num].forward, CPUInfer.submit(
qlen, moes[i % layer_num].forward(
n_routed_experts, qlen,
expert_ids[i % layer_num].data_ptr(), n_routed_experts,
weights[i % layer_num].data_ptr(), expert_ids[i % layer_num].data_ptr(),
input[i % layer_num].data_ptr(), weights[i % layer_num].data_ptr(),
output[i % layer_num].data_ptr()) input[i % layer_num].data_ptr(),
output[i % layer_num].data_ptr()
)
)
CPUInfer.sync() CPUInfer.sync()
end = time.perf_counter() end = time.perf_counter()
total_time = end - start total_time = end - start
......
...@@ -14,19 +14,71 @@ import time ...@@ -14,19 +14,71 @@ import time
import torch import torch
import torch.nn.quantized as nnq import torch.nn.quantized as nnq
scale, zero_point = 0.1, 0 # Adjust scale and zero_point based on your dataset
expert_num = 160
hidden_size = 5120
intermediate_size = 1536
n_routed_experts = 6
layer_num = 10
qlen = 1
warm_up_iter = 1000
test_iter = 10000
def act_fn(x): def act_fn(x):
return x / (1.0 + torch.exp(-x)) return x / (1.0 + torch.exp(-x))
def mlp_torch(input, gate_proj, up_proj, down_proj):
if isinstance(gate_proj, nnq.Linear):
input_q = torch.quantize_per_tensor(input.to(torch.float32), scale, zero_point, torch.quint8)
gate_buf = gate_proj(input_q)
up_buf = up_proj(input_q)
gate_buf = gate_buf.dequantize()
up_buf = up_buf.dequantize()
intermediate = act_fn(gate_buf) * up_buf
intermediate_q = torch.quantize_per_tensor(intermediate, scale, zero_point, torch.quint8)
expert_output = down_proj(intermediate_q)
ret = expert_output.dequantize()
else:
gate_buf = torch.mm(input.to(gate_proj.dtype), gate_proj.t())
up_buf = torch.mm(input.to(up_proj.dtype), up_proj.t())
intermediate = act_fn(gate_buf) * up_buf
ret = torch.mm(intermediate.to(down_proj.dtype), down_proj.t())
return ret
def moe_torch(input, expert_ids, weights, gate_proj, up_proj, down_proj):
cnts = expert_ids.new_zeros((expert_ids.shape[0], expert_num))
cnts.scatter_(1, expert_ids, 1)
tokens_per_expert = cnts.sum(dim=0)
idxs = expert_ids.view(-1).argsort()
sorted_tokens = input[idxs // expert_ids.shape[1]]
outputs = []
start_idx = 0
for i, num_tokens in enumerate(tokens_per_expert):
end_idx = start_idx + num_tokens
if num_tokens == 0:
continue
tokens_for_this_expert = sorted_tokens[start_idx:end_idx]
expert_out = mlp_torch(tokens_for_this_expert, gate_proj[i], up_proj[i], down_proj[i])
outputs.append(expert_out)
start_idx = end_idx
outs = torch.cat(outputs, dim=0) if len(outputs) else sorted_tokens.new_empty(0)
new_x = torch.empty_like(outs)
new_x[idxs] = outs
t_output = (
new_x.view(*expert_ids.shape, -1)
.type(weights.dtype)
.mul_(weights.unsqueeze(dim=-1))
.sum(dim=1)
.type(new_x.dtype)
)
return t_output
def bench_moe(quant_mode: str): def bench_moe(quant_mode: str):
with torch.inference_mode(mode=True): with torch.inference_mode(mode=True):
expert_num = 10
hidden_size = 5120
intermediate_size = 1536
n_routed_experts = 6
layer_num = 10
warm_up_iter = 1000
test_iter = 10000
if quant_mode == "fp32": if quant_mode == "fp32":
proj_type = torch.float32 proj_type = torch.float32
bytes_per_elem = 4.000000 bytes_per_elem = 4.000000
...@@ -50,7 +102,6 @@ def bench_moe(quant_mode: str): ...@@ -50,7 +102,6 @@ def bench_moe(quant_mode: str):
up_proj = torch.randn((expert_num, intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() up_proj = torch.randn((expert_num, intermediate_size, hidden_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
down_proj = torch.randn((expert_num, hidden_size, intermediate_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous() down_proj = torch.randn((expert_num, hidden_size, intermediate_size), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
if quant_mode == "qint8": if quant_mode == "qint8":
scale, zero_point = 0.1, 0 # Adjust scale and zero_point based on your dataset
quantized_gate_proj = [] quantized_gate_proj = []
quantized_up_proj = [] quantized_up_proj = []
quantized_down_proj = [] quantized_down_proj = []
...@@ -74,82 +125,20 @@ def bench_moe(quant_mode: str): ...@@ -74,82 +125,20 @@ def bench_moe(quant_mode: str):
gate_projs.append(gate_proj.to(proj_type)) gate_projs.append(gate_proj.to(proj_type))
up_projs.append(up_proj.to(proj_type)) up_projs.append(up_proj.to(proj_type))
down_projs.append(down_proj.to(proj_type)) down_projs.append(down_proj.to(proj_type))
expert_ids = torch.stack([torch.stack([torch.randperm(expert_num, dtype=torch.int64, device = "cuda")[:n_routed_experts] for _ in range(qlen)]) for _ in range(layer_num)]).to("cpu").contiguous()
weights = torch.rand((layer_num, qlen, n_routed_experts), dtype=torch.float32, device = "cuda").to("cpu").contiguous()
input = torch.randn((layer_num, qlen, hidden_size), dtype=torch.bfloat16, device = "cuda").to("cpu").contiguous()
# warm up # warm up
for i in range(warm_up_iter): for i in range(warm_up_iter):
expert_ids = torch.randint(0, expert_num, (n_routed_experts,), dtype=torch.int64).contiguous() moe_torch(input[i % layer_num], expert_ids[i % layer_num], weights[i % layer_num], gate_projs[i % layer_num], up_projs[i % layer_num], down_projs[i % layer_num])
weights = torch.rand((n_routed_experts,), dtype=torch.float32).contiguous()
input = torch.randn((1, hidden_size), dtype=torch.float32).contiguous()
if quant_mode == "qint8":
input_q = torch.quantize_per_tensor(input, scale, zero_point, torch.quint8)
t_output = torch.zeros((1, hidden_size), dtype=torch.float32).contiguous()
gate_proj = gate_projs[i%layer_num]
up_proj = up_projs[i%layer_num]
down_proj = down_projs[i%layer_num]
for i, expert_id in enumerate(expert_ids):
quantized_gate = gate_proj[expert_id]
gate_buf = quantized_gate(input_q)
quantized_up = up_proj[expert_id]
up_buf = quantized_up(input_q)
gate_buf = gate_buf.dequantize()
up_buf = up_buf.dequantize()
intermediate = act_fn(gate_buf) * up_buf
intermediate_q = torch.quantize_per_tensor(intermediate, scale, zero_point, torch.quint8)
quantized_down = down_proj[expert_id]
expert_output = quantized_down(intermediate_q)
expert_output = expert_output.dequantize()
t_output += weights[i] * expert_output
else:
t_output = torch.zeros((1, hidden_size), dtype=proj_type).contiguous()
gate_proj = gate_projs[i%layer_num]
up_proj = up_projs[i%layer_num]
down_proj = down_projs[i%layer_num]
for i, expert_id in enumerate(expert_ids):
gate_buf = torch.mm(input.to(proj_type), gate_proj[expert_id].t())
up_buf = torch.mm(input.to(proj_type), up_proj[expert_id].t())
intermediate = act_fn(gate_buf) * up_buf
expert_output = torch.mm(intermediate.to(proj_type), down_proj[expert_id].t())
t_output += weights[i] * expert_output
# test # test
total_time = 0 start = time.perf_counter()
for i in range(test_iter): for i in range(test_iter):
expert_ids = torch.randint(0, expert_num, (n_routed_experts,), dtype=torch.int64).contiguous() moe_torch(input[i % layer_num], expert_ids[i % layer_num], weights[i % layer_num], gate_projs[i % layer_num], up_projs[i % layer_num], down_projs[i % layer_num])
weights = torch.rand((n_routed_experts,), dtype=torch.float32).contiguous() end = time.perf_counter()
input = torch.randn((1, hidden_size), dtype=torch.float32).contiguous() total_time = end - start
start = time.perf_counter()
if quant_mode == "qint8":
input_q = torch.quantize_per_tensor(input, scale, zero_point, torch.quint8)
t_output = torch.zeros((1, hidden_size), dtype=torch.float32).contiguous()
gate_proj = gate_projs[i%layer_num]
up_proj = up_projs[i%layer_num]
down_proj = down_projs[i%layer_num]
for i, expert_id in enumerate(expert_ids):
quantized_gate = gate_proj[expert_id]
gate_buf = quantized_gate(input_q)
quantized_up = up_proj[expert_id]
up_buf = quantized_up(input_q)
gate_buf = gate_buf.dequantize()
up_buf = up_buf.dequantize()
intermediate = act_fn(gate_buf) * up_buf
intermediate_q = torch.quantize_per_tensor(intermediate, scale, zero_point, torch.quint8)
quantized_down = down_proj[expert_id]
expert_output = quantized_down(intermediate_q)
expert_output = expert_output.dequantize()
t_output += weights[i] * expert_output
else:
t_output = torch.zeros((1, hidden_size), dtype=proj_type).contiguous()
gate_proj = gate_projs[i%layer_num]
up_proj = up_projs[i%layer_num]
down_proj = down_projs[i%layer_num]
for i, expert_id in enumerate(expert_ids):
gate_buf = torch.mm(input.to(proj_type), gate_proj[expert_id].t())
up_buf = torch.mm(input.to(proj_type), up_proj[expert_id].t())
intermediate = act_fn(gate_buf) * up_buf
expert_output = torch.mm(intermediate.to(proj_type), down_proj[expert_id].t())
t_output += weights[i] * expert_output
end = time.perf_counter()
total_time += end - start
print('Quant mode: ', quant_mode) print('Quant mode: ', quant_mode)
print('Time(s): ', total_time) print('Time(s): ', total_time)
print('Iteration: ', test_iter) print('Iteration: ', test_iter)
......
/** /**
* @Description : * @Description :
* @Author : chenht2022 * @Author : chenht2022
* @Date : 2024-07-16 10:43:18 * @Date : 2024-07-16 10:43:18
* @Version : 1.0.0 * @Version : 1.0.0
* @LastEditors : chenht2022 * @LastEditors : chenht2022
* @LastEditTime : 2024-07-25 10:33:42 * @LastEditTime : 2024-08-07 09:47:43
* @Copyright (c) 2024 by KVCache.AI, All Rights Reserved. * @Copyright (c) 2024 by KVCache.AI, All Rights Reserved.
**/ **/
#ifndef CPUINFER_CPUINFER_H #ifndef CPUINFER_CPUINFER_H
#define CPUINFER_CPUINFER_H #define CPUINFER_CPUINFER_H
...@@ -17,6 +17,7 @@ ...@@ -17,6 +17,7 @@
#include <queue> #include <queue>
#include <thread> #include <thread>
#include <vector> #include <vector>
#include "cuda_runtime.h"
#include "backend.h" #include "backend.h"
#include "task_queue.h" #include "task_queue.h"
...@@ -39,16 +40,39 @@ class CPUInfer { ...@@ -39,16 +40,39 @@ class CPUInfer {
} }
template <typename Func, typename Obj, typename... Args> template <typename Func, typename Obj, typename... Args>
void submit(Func f, Obj* obj, Args... args) { void enqueue(Func f, Obj* obj, Args... args) {
task_queue_->enqueue([=]() { task_queue_->enqueue([=]() {
std::invoke(f, *obj, args..., backend_); std::invoke(f, *obj, args..., backend_);
}); });
} }
void submit(std::pair<intptr_t, intptr_t> params) {
void (*func)(void*) = (void (*)(void*))params.first;
void* args = (void*)params.second;
*((CPUInfer**)args) = this;
func(args);
}
void sync() { void sync() {
task_queue_->sync(); task_queue_->sync();
} }
void submit_with_cuda_stream(intptr_t user_cuda_stream, std::pair<intptr_t, intptr_t> params) {
void (*func)(void*) = (void (*)(void*))params.first;
void* args = (void*)params.second;
*((CPUInfer**)args) = this;
cudaLaunchHostFunc((cudaStream_t)user_cuda_stream, (cudaHostFn_t)func, args);
}
static void sync_(void* cpu_infer_ptr) {
CPUInfer* cpuinfer = (CPUInfer*)cpu_infer_ptr;
cpuinfer->sync();
}
void sync_with_cuda_stream(intptr_t user_cuda_stream) {
cudaLaunchHostFunc((cudaStream_t)user_cuda_stream, (cudaHostFn_t)&sync_, (void*)this);
}
public: public:
Backend* backend_; Backend* backend_;
TaskQueue* task_queue_; TaskQueue* task_queue_;
......
...@@ -4,7 +4,7 @@ ...@@ -4,7 +4,7 @@
* @Date : 2024-07-16 10:43:18 * @Date : 2024-07-16 10:43:18
* @Version : 1.0.0 * @Version : 1.0.0
* @LastEditors : chenxl * @LastEditors : chenxl
* @LastEditTime : 2024-08-08 04:23:51 * @LastEditTime : 2024-08-12 12:28:25
* @Copyright (c) 2024 by KVCache.AI, All Rights Reserved. * @Copyright (c) 2024 by KVCache.AI, All Rights Reserved.
**/ **/
#ifndef CPUINFER_TASKQUEUE_H #ifndef CPUINFER_TASKQUEUE_H
......
...@@ -3,8 +3,8 @@ ...@@ -3,8 +3,8 @@
* @Author : Azure-Tang * @Author : Azure-Tang
* @Date : 2024-07-25 13:38:30 * @Date : 2024-07-25 13:38:30
* @Version : 1.0.0 * @Version : 1.0.0
* @LastEditors : Azure * @LastEditors : kkk1nak0
* @LastEditTime : 2024-07-26 08:36:03 * @LastEditTime : 2024-08-12 03:05:04
* @Copyright (c) 2024 by KVCache.AI, All Rights Reserved. * @Copyright (c) 2024 by KVCache.AI, All Rights Reserved.
**/ **/
...@@ -23,8 +23,14 @@ PYBIND11_MODULE(KTransformersOps, m) { ...@@ -23,8 +23,14 @@ PYBIND11_MODULE(KTransformersOps, m) {
py::arg("data"), py::arg("blk_size"), py::arg("device")); py::arg("data"), py::arg("blk_size"), py::arg("device"));
m.def("dequantize_q6_k", &dequantize_q6_k, "Function to dequantize q6_k data.", m.def("dequantize_q6_k", &dequantize_q6_k, "Function to dequantize q6_k data.",
py::arg("data"), py::arg("blk_size"), py::arg("device")); py::arg("data"), py::arg("blk_size"), py::arg("device"));
m.def("dequantize_q5_k", &dequantize_q5_k, "Function to dequantize q5_k data.",
py::arg("data"), py::arg("blk_size"), py::arg("device"));
m.def("dequantize_q4_k", &dequantize_q4_k, "Function to dequantize q4_k data.", m.def("dequantize_q4_k", &dequantize_q4_k, "Function to dequantize q4_k data.",
py::arg("data"), py::arg("blk_size"), py::arg("device")); py::arg("data"), py::arg("blk_size"), py::arg("device"));
m.def("dequantize_q3_k", &dequantize_q3_k, "Function to dequantize q3_k data.",
py::arg("data"), py::arg("blk_size"), py::arg("device"));
m.def("dequantize_q2_k", &dequantize_q2_k, "Function to dequantize q2_k data.",
py::arg("data"), py::arg("blk_size"), py::arg("device"));
m.def("gptq_marlin_gemm", &gptq_marlin_gemm, "Function to perform GEMM using Marlin quantization.", m.def("gptq_marlin_gemm", &gptq_marlin_gemm, "Function to perform GEMM using Marlin quantization.",
py::arg("a"), py::arg("b_q_weight"), py::arg("b_scales"), py::arg("g_idx"), py::arg("a"), py::arg("b_q_weight"), py::arg("b_scales"), py::arg("g_idx"),
py::arg("perm"), py::arg("workspace"), py::arg("num_bits"), py::arg("size_m"), py::arg("perm"), py::arg("workspace"), py::arg("num_bits"), py::arg("size_m"),
......