## Text Recognition

- [1 Data Preparation](#数据准备)
    - [1.1 Custom Dataset](#自定义数据集)
    - [1.2 Data Download](#数据下载)
    - [1.3 Dictionary](#字典)
    - [1.4 Space Support](#支持空格)

- [2 Start Training](#启动训练)
    - [2.1 Data Augmentation](#数据增强)
    - [2.2 Training](#训练)
    - [2.3 Multi-language](#小语种)

- [3 Evaluation](#评估)

- [4 Prediction](#预测)
    - [4.1 Prediction with the Training Engine](#训练引擎预测)


<a name="数据准备"></a>
### 1. Data Preparation

PaddleOCR supports two data formats:
 - `lmdb`: used to train on datasets stored in LMDB format;
 - `general data`: used to train on datasets whose labels are stored in text files.

The default storage path for training data is `PaddleOCR/train_data`. If you already have a dataset on your disk, just create a soft link to the dataset directory:

```
# Linux and macOS
ln -sf <path/to/dataset> <path/to/paddle_ocr>/train_data/dataset
# Windows
mklink /d <path/to/paddle_ocr>/train_data/dataset <path/to/dataset>
```

<a name="自定义数据集"></a>
#### 1.1 Custom Dataset
The following uses the general data format as an example to describe how to prepare a dataset:

* Training set

It is recommended to put the training images into the same folder and record the image paths and labels in a txt file (rec_gt_train.txt). The contents of the txt file look like this:

**Note:** In the txt file, the image path and the label must be separated by \t; using any other separator will cause training errors.

```
" Image file name                 Image label "

train_data/rec/train/word_001.jpg   简单可依赖
train_data/rec/train/word_002.jpg   用科技让复杂的世界更简单
...
```
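If you generate the label file with a script, the only strict requirement is the tab character between the image path and the label. A minimal sketch (the file names and labels below are just the placeholders from the example above) could look like this:

```python
# Minimal sketch: write a recognition label file with tab-separated fields.
# The image paths and labels are placeholders; replace them with your own data.
samples = [
    ("train_data/rec/train/word_001.jpg", "简单可依赖"),
    ("train_data/rec/train/word_002.jpg", "用科技让复杂的世界更简单"),
]

with open("rec_gt_train.txt", "w", encoding="utf-8") as f:
    for img_path, label in samples:
        # Use "\t" between path and label; other separators will break training.
        f.write(f"{img_path}\t{label}\n")
```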

The training set should end up with the following file structure:
```
|-train_data
  |-rec
    |- rec_gt_train.txt
    |- train
        |- word_001.png
        |- word_002.jpg
        |- word_003.jpg
        | ...
```

- Test set

Similar to the training set, the test set also needs a folder containing all the images (test) and a rec_gt_test.txt file. The structure of the test set is as follows:

```
|-train_data
  |-rec
    |- rec_gt_test.txt
    |- test
        |- word_001.jpg
        |- word_002.jpg
        |- word_003.jpg
        | ...
```

<a name="数据下载"></a>

#### 1.2 Data Download

If you do not have a dataset locally, you can download the [icdar2015](http://rrc.cvc.uab.es/?ch=4&com=downloads) data from the official website for a quick try. You can also refer to [DTRB](https://github.com/clovaai/deep-text-recognition-benchmark#download-lmdb-dataset-for-traininig-and-evaluation-from-here) to download the LMDB-format datasets required by the benchmark.

If you use the public icdar2015 dataset, PaddleOCR provides label files for training on the icdar2015 dataset, which can be downloaded as follows:

If you want to reproduce the results of the SRN paper, you need to download the offline [augmented data](https://pan.baidu.com/s/1-HSZ-ZVdqBF2HaBZ5pRAKA) (extraction code: y3ry). The augmented data is generated from MJSynth and SynthText by rotation and perturbation. After downloading, please extract it to the {your_path}/PaddleOCR/train_data/data_lmdb_release/training/ directory.

If you want to reproduce the results of the SAR paper, you need to download [SynthAdd](https://pan.baidu.com/share/init?surl=uV0LtoNmcxbO-0YA7Ch4dg) (extraction code: 627x). In addition, the real datasets icdar2013, icdar2015, cocotext and IIIT5K are also used as part of the training data. For details, please refer to the SAR paper.

```
# Training set labels
wget -P ./train_data/ic15_data  https://paddleocr.bj.bcebos.com/dataset/rec_gt_train.txt
# Test set labels
wget -P ./train_data/ic15_data  https://paddleocr.bj.bcebos.com/dataset/rec_gt_test.txt
```

PaddleOCR also provides a data format conversion script that can convert the labels downloaded from the official website into the supported format. The conversion tool is `ppocr/utils/gen_label.py`; the training set is used as an example here:

```
# Convert the label file downloaded from the official website into rec_gt_label.txt
python gen_label.py --mode="rec" --input_path="{path/of/origin/label}" --output_label="rec_gt_label.txt"
```

<a name="字典"></a>
#### 1.3 Dictionary

Finally, you need to provide a dictionary ({word_dict_name}.txt) so that, during training, the model can map every character that appears to an index in the dictionary.

Therefore, the dictionary must contain every character you want to recognize correctly. {word_dict_name}.txt should be written in the following format, one character per line, and saved with `utf-8` encoding:

```
l
d
a
d
r
n
```

word_dict.txt has a single character per line, mapping each character to a numeric index; with the dictionary above, "and" will be mapped to [2 5 1]. A minimal sketch of this mapping is shown below.

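As an illustration of this mapping (a minimal sketch, not the actual PaddleOCR loader code), the index of each character is simply its line number in the dictionary file:

```python
# Minimal sketch of the character-to-index mapping; not the PaddleOCR implementation.
char_to_idx = {}
with open("word_dict.txt", "r", encoding="utf-8") as f:
    for idx, line in enumerate(f):
        ch = line.rstrip("\n")
        char_to_idx.setdefault(ch, idx)  # keep the first index if a character repeats

# With the dictionary shown above (l, d, a, d, r, n), this prints [2, 5, 1].
print([char_to_idx[c] for c in "and"])
```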
* Built-in dictionaries

PaddleOCR ships with a number of built-in dictionaries that can be used as needed.

`ppocr/utils/ppocr_keys_v1.txt` is a Chinese dictionary containing 6623 characters

`ppocr/utils/ic15_dict.txt` is an English dictionary containing 36 characters

`ppocr/utils/dict/french_dict.txt` is a French dictionary containing 118 characters

`ppocr/utils/dict/japan_dict.txt` is a Japanese dictionary containing 4399 characters

`ppocr/utils/dict/korean_dict.txt` is a Korean dictionary containing 3636 characters

`ppocr/utils/dict/german_dict.txt` is a German dictionary containing 131 characters

`ppocr/utils/en_dict.txt` is an English dictionary containing 96 characters

The current multi-language models are still at the demo stage; we will keep improving the models and adding more languages. **Dictionaries and fonts for other languages are very welcome contributions.**
If you are willing, you can submit your dictionary file to [dict](../../ppocr/utils/dict) and we will credit you in the repo.

- Custom dictionary

To use a custom dictionary file, add the `character_dict_path` field in `configs/rec/rec_icdar15_train.yml` and point it to your dictionary path, and set `character_type` to `ch`.

<a name="支持空格"></a>
#### 1.4 Add the Space Category

If you want the model to recognize the "space" category, set the `use_space_char` field in the yml file to `True`.

<a name="启动训练"></a>
### 2. Start Training

PaddleOCR provides training, evaluation and prediction scripts. This section takes the CRNN recognition model as an example:

First download a pretrained model; you can download the trained model and fine-tune it on the icdar2015 data:

```
cd PaddleOCR/
# Download the MobileNetV3 pretrained model
wget -P ./pretrain_models/ https://paddleocr.bj.bcebos.com/dygraph_v2.0/en/rec_mv3_none_bilstm_ctc_v2.0_train.tar
# Unpack the model parameters
cd pretrain_models
tar -xf rec_mv3_none_bilstm_ctc_v2.0_train.tar && rm -rf rec_mv3_none_bilstm_ctc_v2.0_train.tar
```

Start training:

*If you installed the CPU version of Paddle, set the `use_gpu` field in the configuration file to false.*

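If you prefer not to edit the configuration file, the same switch can also be overridden from the command line with the `-o` option, in the same way other `Global` fields are overridden elsewhere in this document, for example:

```bash
# CPU training by overriding use_gpu on the command line
python3 tools/train.py -c configs/rec/rec_icdar15_train.yml -o Global.use_gpu=False
```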
```
# GPU training: single-card and multi-card training are supported; specify the card ids with --gpus
# Train on the icdar15 English data; the training log is automatically saved as train.log under "{save_model_dir}"
python3 -m paddle.distributed.launch --gpus '0,1,2,3'  tools/train.py -c configs/rec/rec_icdar15_train.yml
```
<a name="数据增强"></a>
#### 2.1 Data Augmentation

PaddleOCR provides a variety of data augmentation methods, and augmentation is already enabled in the default configuration files.

The default perturbations are: color space conversion (cvtColor), blur, jitter, Gaussian noise, random crop, perspective, color reverse, and TIA augmentation.

During training, each perturbation is selected with a probability of 40%. For the concrete implementation, please refer to [rec_img_aug.py](../../ppocr/data/imaug/rec_img_aug.py); an illustrative sketch is given at the end of this section.

*Due to OpenCV compatibility issues, the perturbation operations currently only support Linux.*

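The selection scheme described above can be sketched as follows. This is only an illustration with made-up operation names; the real implementation lives in rec_img_aug.py:

```python
import random

# Illustrative sketch: each perturbation is applied independently with 40% probability.
# The operation list is hypothetical; see ppocr/data/imaug/rec_img_aug.py for the real code.
def augment(img, ops, prob=0.4):
    for op in ops:  # e.g. [cvt_color, blur, jitter, add_gauss_noise, random_crop, ...]
        if random.random() < prob:
            img = op(img)
    return img
```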
<a name="训练"></a>
#### 2.2 Training

PaddleOCR supports alternating training and evaluation. You can modify `eval_batch_step` in `configs/rec/rec_icdar15_train.yml` to set the evaluation frequency; by default the model is evaluated once every 500 iterations, and the model with the best accuracy so far is saved as `output/rec_CRNN/best_accuracy`.

If the validation set is large, evaluation will be time-consuming. It is recommended to evaluate less often, or to evaluate only after training finishes, for example:

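For instance, evaluating every 2000 iterations instead of every 500 only requires changing one field in the configuration file (depending on the PaddleOCR version, `eval_batch_step` is either a single integer or a `[start_iter, interval]` pair):

```yaml
Global:
  ...
  # run evaluation every 2000 iterations once training has started
  eval_batch_step: [0, 2000]
  ...
```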
**Tip:** You can use the -c option to select one of the model configurations under `configs/rec/` for training. The recognition algorithms supported by PaddleOCR are:


| Configuration file |  Algorithm |   backbone |   trans   |   seq      |     pred     |
| :--------: |  :-------:   | :-------:  |   :-------:   |   :-----:   |  :-----:   |
| [rec_chinese_lite_train_v2.0.yml](../../configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml) |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  |
| [rec_chinese_common_train_v2.0.yml](../../configs/rec/ch_ppocr_v2.0/rec_chinese_common_train_v2.0.yml) |  CRNN | ResNet34_vd |  None   |  BiLSTM |  ctc  |
| rec_icdar15_train.yml |  CRNN |   Mobilenet_v3 large 0.5 |  None   |  BiLSTM |  ctc  |
| rec_mv3_none_bilstm_ctc.yml |  CRNN |   Mobilenet_v3 large 0.5 |  None   |  BiLSTM |  ctc  |
| rec_mv3_none_none_ctc.yml |  Rosetta |   Mobilenet_v3 large 0.5 |  None   |  None |  ctc  |
| rec_r34_vd_none_bilstm_ctc.yml |  CRNN |   Resnet34_vd |  None   |  BiLSTM |  ctc  |
| rec_r34_vd_none_none_ctc.yml |  Rosetta |   Resnet34_vd |  None   |  None |  ctc  |
| rec_mv3_tps_bilstm_att.yml |  CRNN |   Mobilenet_v3 |  TPS   |  BiLSTM |  att  |
| rec_r34_vd_tps_bilstm_att.yml |  CRNN |   Resnet34_vd |  TPS   |  BiLSTM |  att  |
| rec_r50fpn_vd_none_srn.yml    | SRN | Resnet50_fpn_vd    | None    | rnn | srn |
| rec_mtb_nrtr.yml    | NRTR | nrtr_mtb    | None    | transformer encoder | transformer decoder |
| rec_r31_sar.yml               | SAR | ResNet31 | None | LSTM encoder | LSTM decoder |

For training on Chinese data, [rec_chinese_lite_train_v2.0.yml](../../configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml) is recommended. If you want to try other algorithms on a Chinese dataset, please modify the configuration file as described below.

Take `rec_chinese_lite_train_v2.0.yml` as an example:
```
Global:
  ...
  # Add a custom dictionary; if you modify the dictionary, point this path to the new dictionary
  character_dict_path: ppocr/utils/ppocr_keys_v1.txt
  # Modify the character type
  character_type: ch
  ...
  # Recognize spaces
  use_space_char: True

Optimizer:
  ...
  # Add a learning rate decay strategy
  lr:
    name: Cosine
    learning_rate: 0.001
  ...

...

Train:
  dataset:
    # Dataset format; LMDBDataSet and SimpleDataSet are supported
    name: SimpleDataSet
    # Dataset root directory
    data_dir: ./train_data/
    # Training set label file
    label_file_list: ["./train_data/train_list.txt"]
    transforms:
      ...
      - RecResizeImg:
          # Modify image_shape to fit long text
          image_shape: [3, 32, 320]
      ...
  loader:
    ...
    # Training batch size per card
    batch_size_per_card: 256
    ...

Eval:
  dataset:
    # Dataset format; LMDBDataSet and SimpleDataSet are supported
    name: SimpleDataSet
    # Dataset root directory
    data_dir: ./train_data
    # Validation set label file
    label_file_list: ["./train_data/val_list.txt"]
    transforms:
      ...
      - RecResizeImg:
          # Modify image_shape to fit long text
          image_shape: [3, 32, 320]
      ...
  loader:
    # Validation batch size per card
    batch_size_per_card: 256
    ...
```
**Note: the configuration file used for prediction/evaluation must be consistent with the one used for training.**

<a name="小语种"></a>
#### 2.3 Multi-language

PaddleOCR currently supports recognition for 80 languages besides Chinese. A multi-language configuration file template is provided under `configs/rec/multi_language`: [rec_multi_language_lite_train.yml](../../configs/rec/multi_language/rec_multi_language_lite_train.yml).

There are two ways to create the required configuration file:

1. Automatically generate a configuration file with the script

[generate_multi_language_configs.py](../../configs/rec/multi_language/generate_multi_language_configs.py) can help you generate configuration files for multi-language models.

- Take Italian as an example. If your data is prepared in the following format:
    ```
    |-train_data
        |- it_train.txt # training set labels
        |- it_val.txt # validation set labels
        |- data
            |- word_001.jpg
            |- word_002.jpg
            |- word_003.jpg
            | ...
    ```

    You can generate the configuration file with the default parameters:

    ```bash
    # This command must be run from the specified directory
    cd PaddleOCR/configs/rec/multi_language/
    # Use -l or --language to specify the language whose configuration file should be generated; this command writes the default parameters into the configuration file
    python3 generate_multi_language_configs.py -l it
    ```

- If your data is located elsewhere, or if you want to use your own dictionary, you can generate the configuration file by specifying the relevant parameters:

    ```bash
    # -l or --language is required
    # --train sets the training set label file, --val the validation set label file, --data_dir the data root directory, --dict the dictionary path, and -o overrides the corresponding default parameters
    cd PaddleOCR/configs/rec/multi_language/
    python3 generate_multi_language_configs.py -l it \  # language
    --train {path/of/train_label.txt} \ # path to the training label file
    --val {path/of/val_label.txt} \     # path to the validation label file
    --data_dir {train_data/path} \      # root directory of the training data
    --dict {path/of/dict} \             # path to the dictionary file
    -o Global.use_gpu=False             # whether to use the GPU
    ...

    ```

Italian is written with the Latin alphabet, so after running the command you will get a configuration file named rec_latin_lite_train.yml.

2. Modify the configuration file manually

   You can also manually modify the following fields in the template:

   ```
    Global:
      use_gpu: True
      epoch_num: 500
      ...
      character_type: it  # the language to recognize
      character_dict_path:  {path/of/dict} # path to the dictionary file

   Train:
      dataset:
        name: SimpleDataSet
        data_dir: train_data/ # root directory of the data
        label_file_list: ["./train_data/train_list.txt"] # training set label path
      ...

   Eval:
      dataset:
        name: SimpleDataSet
        data_dir: train_data/ # root directory of the data
        label_file_list: ["./train_data/val_list.txt"] # validation set label path
      ...

   ```

The multi-language algorithms currently supported by PaddleOCR are:

| Configuration file |  Algorithm |   backbone |   trans   |   seq      |     pred     |  language | character_type |
| :--------: |  :-------:   | :-------:  |   :-------:   |   :-----:   |  :-----:   | :-----:  | :-----:  |
| rec_chinese_cht_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | Traditional Chinese  | chinese_cht|
| rec_en_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | English (case-sensitive)   | EN |
| rec_french_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | French |  french |
| rec_ger_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | German   | german |
| rec_japan_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | Japanese  | japan |
| rec_korean_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | Korean  | korean |
| rec_latin_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | Latin alphabet  | latin |
| rec_arabic_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | Arabic alphabet |  ar |
| rec_cyrillic_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | Cyrillic alphabet  | cyrillic |
| rec_devanagari_lite_train.yml |  CRNN |   Mobilenet_v3 small 0.5 |  None   |  BiLSTM |  ctc  | Devanagari  | devanagari |

For more supported languages, please refer to: [Multi-language models](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.1/doc/doc_ch/multi_languages.md#%E8%AF%AD%E7%A7%8D%E7%BC%A9%E5%86%99)

Multi-language models are trained in the same way as the Chinese model. The training set consists of 1,000,000 synthetic images. A small number of fonts can be downloaded in either of the following ways:
* [Baidu Netdisk](https://pan.baidu.com/s/1bS_u207Rm7YbY33wOECKDA), extraction code: frgi.
* [Google Drive](https://drive.google.com/file/d/18cSWX7wXSy4G0tbKJ0d9PuIaiwRLHpjA/view)

If you want to fine-tune on top of the existing models, please modify the configuration file as described below.

Take `rec_french_lite_train` as an example:
```
Global:
  ...
  # Add a custom dictionary; if you modify the dictionary, point this path to the new dictionary
  character_dict_path: ./ppocr/utils/dict/french_dict.txt
  ...
  # Recognize spaces
  use_space_char: True

...

Train:
  dataset:
    # Dataset format; LMDBDataSet and SimpleDataSet are supported
    name: SimpleDataSet
    # Dataset root directory
    data_dir: ./train_data/
    # Training set label file
    label_file_list: ["./train_data/french_train.txt"]
    ...

Eval:
  dataset:
    # Dataset format; LMDBDataSet and SimpleDataSet are supported
    name: SimpleDataSet
    # Dataset root directory
    data_dir: ./train_data
    # Validation set label file
    label_file_list: ["./train_data/french_val.txt"]
    ...
```
<a name="评估"></a>
### 3. Evaluation

The evaluation dataset can be set by modifying the `label_file_list` field under Eval in `configs/rec/rec_icdar15_train.yml`.

```
# GPU evaluation; Global.checkpoints are the weights to be evaluated
python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec_icdar15_train.yml -o Global.checkpoints={path/to/weights}/best_accuracy
```

<a name="预测"></a>
### 4. Prediction

<a name="训练引擎预测"></a>
#### 4.1 Prediction with the Training Engine

Using a model trained with PaddleOCR, you can quickly run prediction with the following script.

By default, the prediction images are stored in `infer_img`, and the trained weights are specified with `-o Global.pretrained_model`:

```
# Predict English results
python3 tools/infer_rec.py -c configs/rec/rec_icdar15_train.yml -o Global.pretrained_model={path/to/weights}/best_accuracy Global.load_static_weights=false Global.infer_img=doc/imgs_words/en/word_1.png
```

Input image:

![](../imgs_words/en/word_1.png)

The prediction result of the input image is:

```
infer_img: doc/imgs_words/en/word_1.png
        result: ('joint', 0.9998967)
```

The configuration file used for prediction must be consistent with the one used for training. For example, if you trained a Chinese model with `python3 tools/train.py -c configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml`, you can use the following command to predict with that Chinese model:

```
# Predict Chinese results
python3 tools/infer_rec.py -c configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml -o Global.pretrained_model={path/to/weights}/best_accuracy Global.load_static_weights=false Global.infer_img=doc/imgs_words/ch/word_1.jpg
```

Input image:

![](../imgs_words/ch/word_1.jpg)

The prediction result of the input image is:

```
infer_img: doc/imgs_words/ch/word_1.jpg
        result: ('韩国小馆', 0.997218)
```