# Supported Quantization Algorithms

List of supported quantization algorithms:
* [Naive Quantizer](#naive-quantizer)
* [QAT Quantizer](#qat-quantizer)
* [DoReFa Quantizer](#dorefa-quantizer)
* [BNN Quantizer](#bnn-quantizer)

## Naive Quantizer

Naive Quantizer quantizes weights to 8 bits by default. It can be used to test quantization algorithms.

### Usage
PyTorch code
```python 
model = nni.compression.torch.NaiveQuantizer(model).compress()
```
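
For a slightly fuller picture, here is a minimal end-to-end sketch. The small `torch.nn` model below is only an illustration and not part of NNI; only `NaiveQuantizer` comes from the API shown above.

```python
import torch.nn as nn
from nni.compression.torch import NaiveQuantizer

# Any torch.nn.Module works; this tiny network is just for illustration.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 26 * 26, 10),
)

# Weights of supported layers are quantized to 8 bits by default.
model = NaiveQuantizer(model).compress()
```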

***

## QAT Quantizer
In [Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](http://openaccess.thecvf.com/content_cvpr_2018/papers/Jacob_Quantization_and_Training_CVPR_2018_paper.pdf), the authors Benoit Jacob and Skirmantas Kligys propose an algorithm for quantizing the model during training.
> We propose an approach that simulates quantization effects in the forward pass of training. Backpropagation still happens as usual, and all weights and biases are stored in floating point so that they can be easily nudged by small amounts. The forward pass then simulates quantized inference as it will happen in the inference engine, by implementing the rounding behavior of the quantization scheme in floating-point arithmetic.
> * Weights are quantized before they are convolved with the input. If batch normalization (see [17]) is used for the layer, the batch normalization parameters are "folded into" the weights before quantization.
> * Activations are quantized at points where they would be during inference, e.g. after the activation function is applied to a convolutional or fully connected layer's output, or after a bypass connection adds or concatenates the outputs of several layers together, such as in ResNets.
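
As a rough illustration of this simulated ("fake") quantization (a sketch only, not NNI's actual implementation): the forward pass rounds tensors to an 8-bit grid while keeping them in floating point, and in practice the rounding step is paired with a straight-through estimator so gradients still reach the full-precision weights.

```python
import torch

def fake_quantize(x, bits=8):
    """Quantize then dequantize x, simulating k-bit inference in floating point."""
    qmin, qmax = 0, 2 ** bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale  # still a float tensor, but restricted to the 8-bit grid

w = torch.randn(4, 4)    # full-precision weights kept for the backward pass
w_q = fake_quantize(w)   # quantized values used in the forward computation
```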


### Usage
You can quantize your model to 8 bits with the code below, placed before your training code.

PyTorch code
```python
from nni.compression.torch import QAT_Quantizer
model = Mnist()

config_list = [{
    'quant_types': ['weight'],
    'quant_bits': {
        'weight': 8,
    }, # an `int` would also work here, since all `quant_types` share the same bit width; see the `ReLU6` config below.
    'op_types':['Conv2d', 'Linear']
}, {
    'quant_types': ['output'],
    'quant_bits': 8,
    'quant_start_step': 7000,
    'op_types':['ReLU6']
}]
quantizer = QAT_Quantizer(model, config_list)
quantizer.compress()
```
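
After `compress()` the model is trained as usual, and quantization is simulated in the forward pass once `quant_start_step` is reached. A minimal training-loop sketch follows; the `train_loader` and optimizer settings are hypothetical, and the loss assumes `Mnist` returns log-probabilities.

```python
import torch
import torch.nn.functional as F

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)

model.train()
for epoch in range(10):
    for data, target in train_loader:   # hypothetical DataLoader over MNIST
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
```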

See the example for more information.

#### User configuration for QAT Quantizer

Common configuration for compression algorithms can be found in the [`config_list` specification](./QuickStart.md).

Configuration needed by this algorithm:

* **quant_start_step:** int

  Disable quantization until the model has been run for the given number of steps. This allows the network to enter a more stable state, so that quantization ranges do not exclude a significant fraction of values. The default is 0.

### Notes

Batch normalization folding is currently not supported.

***

## DoReFa Quantizer

In [DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients](https://arxiv.org/abs/1606.06160), the authors Shuchang Zhou and Yuxin Wu propose the DoReFa method for quantizing weights, activations, and gradients during training.
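
For intuition, the weight-quantization rule from the paper can be sketched as follows (a simplified illustration, not the code NNI runs internally): weights are squashed with tanh, mapped into [0, 1], uniformly rounded to k bits, and rescaled to [-1, 1].

```python
import torch

def quantize_k(x, k):
    """Uniformly round a tensor in [0, 1] to a k-bit grid."""
    n = 2 ** k - 1
    return torch.round(x * n) / n

def dorefa_quantize_weight(w, k=8):
    """k-bit weight quantization as described in the DoReFa-Net paper."""
    w = torch.tanh(w)
    w = w / (2 * w.abs().max()) + 0.5   # map into [0, 1]
    return 2 * quantize_k(w, k) - 1     # map back into [-1, 1]

w_q = dorefa_quantize_weight(torch.randn(3, 3), k=8)
```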

### Usage

To use DoReFa Quantizer, add the following code before your training code.

PyTorch code
```python
from nni.compression.torch import DoReFaQuantizer
config_list = [{
    'quant_types': ['weight'],
    'quant_bits': 8,
    'op_types': 'default'
}]
quantizer = DoReFaQuantizer(model, config_list)
quantizer.compress()
```

See the example for more information.

#### User configuration for DoReFa Quantizer

Common configuration for compression algorithms can be found in the [`config_list` specification](./QuickStart.md).

Configuration needed by this algorithm:

***

## BNN Quantizer

In [Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1](https://arxiv.org/abs/1602.02830), the authors describe the approach as follows:
> We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time. At training time, the binary weights and activations are used for computing the parameter gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which substantially improves power efficiency.
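
Conceptually (a sketch following the paper's formulation, not NNI's implementation), the forward pass replaces weights and activations with their sign, while the backward pass uses a straight-through estimator so gradients update the underlying real-valued weights.

```python
import torch

class Binarize(torch.autograd.Function):
    """Sign binarization with a straight-through estimator for the backward pass."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)  # +1 / -1 used in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        return grad_output * (x.abs() <= 1).float()  # pass gradient only where |x| <= 1

w = torch.randn(4, 4, requires_grad=True)  # real-valued weights kept for updates
w_bin = Binarize.apply(w)                  # binary weights used for computation
```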


### Usage

PyTorch code
```python
from nni.compression.torch import BNNQuantizer
model = VGG_Cifar10(num_classes=10)

configure_list = [{
    'quant_bits': 1,
    'quant_types': ['weight'],
    'op_types': ['Conv2d', 'Linear'],
    'op_names': ['features.0', 'features.3', 'features.7', 'features.10', 'features.14', 'features.17', 'classifier.0', 'classifier.3']
}, {
    'quant_bits': 1,
    'quant_types': ['output'],
    'op_types': ['Hardtanh'],
    'op_names': ['features.6', 'features.9', 'features.13', 'features.16', 'features.20', 'classifier.2', 'classifier.5']
}]

quantizer = BNNQuantizer(model, configure_list)
model = quantizer.compress()
```

See the example [examples/model_compress/BNN_quantizer_cifar10.py](https://github.com/microsoft/nni/tree/master/examples/model_compress/BNN_quantizer_cifar10.py) for more information.

#### User configuration for BNN Quantizer

Common configuration for compression algorithms can be found in the [`config_list` specification](./QuickStart.md).

Configuration needed by this algorithm:

### Experiment

We implemented one of the experiments from [Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1](https://arxiv.org/abs/1602.02830), quantizing **VGGNet** on CIFAR-10. The result is as follows:

| Model  | Accuracy |
| ------ | -------- |
| VGGNet | 86.93%   |


The code for this experiment can be found at [examples/model_compress/BNN_quantizer_cifar10.py](https://github.com/microsoft/nni/tree/master/examples/model_compress/BNN_quantizer_cifar10.py).