English | [简体中文](README_cn.md)

## Introduction
PaddleOCR aims to create rich, leading, and practical OCR tools that help users train better models and apply them in practice.

**Recent updates**
- 2020.8.16 Release text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
- 2020.7.23 Release the recording and slides of the PaddleOCR introduction live course on Bilibili, [address](https://aistudio.baidu.com/aistudio/course/introduce/1519)
- 2020.7.15 Add mobile app demo, supporting both iOS and Android (based on EasyEdge and Paddle Lite)
- 2020.7.15 Improve deployment ability: add C++ inference and serving deployment. In addition, benchmarks of the ultra-lightweight OCR model are provided.
- 2020.7.15 Add several related datasets, data annotation and synthesis tools.
- [more](./doc/doc_en/update_en.md)

## Features
- Ultra-lightweight OCR model, total model size is only 8.6M
    - Single model supports recognition of mixed Chinese/English text and digits, vertical text, and long text
    - Detection model DB (4.1M) + recognition model CRNN (4.5M)
- Various text detection algorithms: EAST, DB
- Various text recognition algorithms: Rosetta, CRNN, STAR-Net, RARE
- Support for Linux, Windows, macOS and other systems

## Visualization

![](doc/imgs_results/11.jpg)

![](doc/imgs_results/img_10.jpg)

[More visualization](./doc/doc_en/visualization_en.md)

You can also quickly experience the ultra-lightweight OCR model: [Online Experience](https://www.paddlepaddle.org.cn/hub/scene/ocr)

Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Android): [Sign in to the website to obtain the QR code for installing the App](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)

Also, you can scan the QR code below to install the App (**Android only**)

<div align="center">
<img src="./doc/ocr-android-easyedge.png"  width = "200" height = "200" />
</div>

- [**OCR Quick Start**](./doc/doc_en/quickstart_en.md)

<a name="Supported-Chinese-model-list"></a>

### Supported Models:

|Model Name|Description|Detection Model link|Recognition Model link|Recognition Model link (supports space)|
|-|-|-|-|-|
|db_crnn_mobile|ultra-lightweight OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) / [pre-train model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)
|db_crnn_server|General OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance_infer.tar) / [pre-train model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance.tar)
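
For a quick programmatic test of the models above, the snippet below is a minimal sketch using the `paddleocr` pip package; it assumes the wheel and PaddlePaddle are installed and that the package downloads the ultra-lightweight models automatically. The image path is only an example, and the exact result format may differ between versions.

```python
# Minimal usage sketch (assumes: pip install paddlepaddle paddleocr).
from paddleocr import PaddleOCR

# On first use, PaddleOCR() downloads the ultra-lightweight detection and
# recognition models if no local model directories are specified.
ocr = PaddleOCR()

# Run detection + recognition on a single image (example path).
result = ocr.ocr("doc/imgs/11.jpg")

# Each entry pairs the detected box coordinates with the recognized text
# and its confidence score; the exact nesting may vary by package version.
for line in result:
    print(line)
```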


## Tutorials
- [Installation](./doc/doc_en/installation_en.md)
- [Quick Start](./doc/doc_en/quickstart_en.md)
- Algorithm introduction
    - [Text Detection Algorithm](#TEXTDETECTIONALGORITHM)
    - [Text Recognition Algorithm](#TEXTRECOGNITIONALGORITHM)
    - [END-TO-END OCR Algorithm](#ENDENDOCRALGORITHM)
- Model training/evaluation
    - [Text Detection](./doc/doc_en/detection_en.md)
    - [Text Recognition](./doc/doc_en/recognition_en.md)
    - [Yml Configuration](./doc/doc_en/config_en.md)
    - [Tricks](./doc/doc_en/tricks_en.md)
- Deployment
    - [Python Inference](./doc/doc_en/inference_en.md)
    - [C++ Inference](./deploy/cpp_infer/readme_en.md)
    - [Serving](./doc/doc_en/serving_en.md)
    - [Mobile](./deploy/lite/readme_en.md)
    - Model Quantization and Compression (coming soon)
    - [Benchmark](./doc/doc_en/benchmark_en.md)
- Datasets
    - [General OCR Datasets (Chinese/English)](./doc/doc_en/datasets_en.md)
    - [Handwritten OCR Datasets (Chinese)](./doc/doc_en/handwritten_datasets_en.md)
    - [Various OCR Datasets (multilingual)](./doc/doc_en/vertical_and_multilingual_datasets_en.md)
    - [Data Annotation Tools](./doc/doc_en/data_annotation_en.md)
    - [Data Synthesis Tools](./doc/doc_en/data_synthesis_en.md)
- [FAQ](#FAQ)
- Visualization
    - [Ultra-lightweight Chinese/English OCR Visualization](#UCOCRVIS)
    - [General Chinese/English OCR Visualization](#GeOCRVIS)
    - [Chinese/English OCR Visualization (Support Space Recognition)](#SpaceOCRVIS)
- [Community](#Community)
- [References](./doc/doc_en/reference_en.md)
- [License](#LICENSE)
- [Contribution](#CONTRIBUTION)

<a name="TEXTDETECTIONALGORITHM"></a>
## Text Detection Algorithm

PaddleOCR open-sources the following text detection algorithms:
- [x]  EAST([paper](https://arxiv.org/abs/1704.03155))
- [x]  DB([paper](https://arxiv.org/abs/1911.08947))
- [x]  SAST([paper](https://arxiv.org/abs/1908.05498))(Baidu Self-Research)

On the ICDAR2015 dataset, the text detection results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|EAST|ResNet50_vd|88.18%|85.51%|86.82%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)|
|EAST|MobileNetV3|81.67%|79.83%|80.74%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
|DB|ResNet50_vd|83.79%|80.65%|82.19%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
|DB|MobileNetV3|75.92%|73.18%|74.53%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|

On the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) street-view dataset, with a total of 30k training images, the related configuration and pre-trained models for the text detection task are as follows:
|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|det_mv3_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|
|General OCR model|ResNet50_vd|det_r50_vd_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|

* Note: For the training and evaluation of the above DB models, the post-processing parameters `box_thresh=0.6` and `unclip_ratio=1.5` need to be set. If you train on different datasets or with different models, these two parameters can be adjusted for better results; a post-processing sketch follows below.
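
To make the effect of these two parameters concrete, the following is a simplified, illustrative sketch of DB-style post-processing (not PaddleOCR's exact implementation): `box_thresh` filters candidate boxes by their mean probability score, and `unclip_ratio` controls how far a detected polygon is expanded before the final box is produced. It assumes the `numpy`, `shapely` and `pyclipper` packages.

```python
# Illustrative sketch of DB-style post-processing; not the exact PaddleOCR code.
import numpy as np
import pyclipper
from shapely.geometry import Polygon

def unclip(box, unclip_ratio=1.5):
    """Expand a detected quadrilateral outward; following the DB paper,
    the offset distance is area * unclip_ratio / perimeter."""
    poly = Polygon(box)
    distance = poly.area * unclip_ratio / poly.length
    offset = pyclipper.PyclipperOffset()
    offset.AddPath(box, pyclipper.JT_ROUND, pyclipper.ET_CLOSEDPOLYGON)
    return np.array(offset.Execute(distance)[0])

def keep_box(box_score, box_thresh=0.6):
    """Keep a candidate box only if its mean probability passes the threshold."""
    return box_score >= box_thresh

# Example: a 100x20 candidate box with score 0.72 is kept and expanded.
box = [[0, 0], [100, 0], [100, 20], [0, 20]]
if keep_box(0.72):
    print(unclip(box))
```

A larger `unclip_ratio` produces looser boxes around the text, while a higher `box_thresh` discards more low-confidence detections.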

For the training guide and use of PaddleOCR text detection algorithms, please refer to the document [Text detection model training/evaluation/prediction](./doc/doc_en/detection_en.md)

<a name="TEXTRECOGNITIONALGORITHM"></a>
## Text Recognition Algorithm

PaddleOCR open-sources the following text recognition algorithms:
- [x]  CRNN([paper](https://arxiv.org/abs/1507.05717))
- [x]  Rosetta([paper](https://arxiv.org/abs/1910.05085))
- [x]  STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
- [x]  RARE([paper](https://arxiv.org/abs/1603.03915v1))
- [x]  SRN([paper](https://arxiv.org/abs/2003.12294))(Baidu Self-Research)

Following the evaluation protocol of [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation results of the above text recognition algorithms (trained on MJSynth and SynthText, evaluated on IIIT, SVT, IC03, IC13, IC15, SVTP and CUTE) are as follows:

|Model|Backbone|Avg Accuracy|Module combination|Download link|
|-|-|-|-|-|
|Rosetta|Resnet34_vd|80.24%|rec_r34_vd_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_none_ctc.tar)|
|Rosetta|MobileNetV3|78.16%|rec_mv3_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_none_ctc.tar)|
|CRNN|Resnet34_vd|82.20%|rec_r34_vd_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_bilstm_ctc.tar)|
|CRNN|MobileNetV3|79.37%|rec_mv3_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_bilstm_ctc.tar)|
|STAR-Net|Resnet34_vd|83.93%|rec_r34_vd_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_ctc.tar)|
|STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
|RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
|RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[Download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|

**Note:** The SRN model uses a data expansion method to augment the two training sets mentioned above; the expanded data can be downloaded from [Baidu Drive](todo).

The average accuracy of the two-stage training in the original paper is 89.74%, and that of the one-stage training in PaddleOCR is 88.33%. Both pre-trained weights can be downloaded [here](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar).

We use the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) dataset, crop out 300k training samples from the original images using the position ground truth, and apply some necessary calibration. In addition, based on the LSVT corpus, 5 million synthetic samples are generated to train the model. The related configuration and pre-trained models are as follows:

|Model|Backbone|Configuration file|Pre-trained model|Space-recognition model|
|-|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|rec_chinese_lite_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) & [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)|
|General OCR model|Resnet34_vd|rec_chinese_common_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance_infer.tar) & [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance.tar)|

For the training guide and use of PaddleOCR text recognition algorithms, please refer to the document [Text recognition model training/evaluation/prediction](./doc/doc_en/recognition_en.md)

<a name="ENDENDOCRALGORITHM"></a>
## END-TO-END OCR Algorithm
- [ ]  [End2End-PSL](https://arxiv.org/abs/1909.07808)(Baidu Self-Research, coming soon)

## Visualization

<a name="UCOCRVIS"></a>
### 1. Ultra-lightweight Chinese/English OCR Visualization [more](./doc/doc_en/visualization_en.md)

<div align="center">
    <img src="doc/imgs_results/1.jpg" width="800">
</div>

<a name="GeOCRVIS"></a>
### 2. General Chinese/English OCR Visualization [more](./doc/doc_en/visualization_en.md)

<div align="center">
    <img src="doc/imgs_results/chinese_db_crnn_server/11.jpg" width="800">
</div>

<a name="SpaceOCRVIS"></a>
### 3. Chinese/English OCR Visualization (Supports Space Recognition) [more](./doc/doc_en/visualization_en.md)

<div align="center">
    <img src="doc/imgs_results/chinese_db_crnn_server/en_paper.jpg" width="800">
</div>

<a name="FAQ"></a>

## FAQ
1. Error when using an attention-based recognition model: `KeyError: 'predict'`

    The inference of recognition model based on attention loss is still being debugged. For Chinese text recognition, it is recommended to choose the recognition model based on CTC loss first. In practice, it is also found that the recognition model based on attention loss is not as effective as the one based on CTC loss.

2. About inference speed

    When an image contains many text regions, the prediction time increases. You can use `--rec_batch_num` to set a smaller recognition batch size; the default value is 30, which can be changed to 10 or another value (see the batching sketch after this list).

3. Service deployment and mobile deployment

    It is expected that the service deployment based on Serving and the mobile deployment based on Paddle Lite will be released successively in mid-to-late June. Stay tuned for more updates.

4. Release time of self-developed algorithms

    Baidu Self-developed algorithms such as SAST, SRN and end2end PSL will be released in June or July. Please be patient.
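
As a concrete illustration of the batching behaviour mentioned in FAQ 2 above, the sketch below shows how detected text crops can be grouped into batches of at most `rec_batch_num` before recognition. It is a simplified, hypothetical helper, not PaddleOCR's actual predictor code.

```python
# Hypothetical helper illustrating how rec_batch_num limits per-batch work;
# not PaddleOCR's actual recognition predictor.
def recognize_in_batches(text_crops, recognize_fn, rec_batch_num=30):
    results = []
    # Smaller rec_batch_num -> smaller batches -> lower peak memory per
    # recognition forward pass when an image contains many text boxes.
    for start in range(0, len(text_crops), rec_batch_num):
        batch = text_crops[start:start + rec_batch_num]
        results.extend(recognize_fn(batch))
    return results

# Toy usage with a dummy recognizer standing in for the real model:
crops = [f"crop_{i}" for i in range(75)]
texts = recognize_in_batches(crops, lambda b: [f"text({c})" for c in b], rec_batch_num=10)
print(len(texts))  # 75 results, produced in 8 batches of at most 10 crops
```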

[more](./doc/doc_en/FAQ_en.md)

<a name="Community"></a>
## Community
Scan the QR code below with WeChat and complete the questionnaire to join the official technical exchange group.

<div align="center">
<img src="./doc/joinus.jpg"  width = "200" height = "200" />
</div>

<a name="LICENSE"></a>
## License
This project is released under the <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/master/LICENSE">Apache 2.0 license</a>.

<a name="CONTRIBUTION"></a>
## Contribution
We welcome all contributions to PaddleOCR and greatly appreciate your feedback.

- Many thanks to [Khanh Tran](https://github.com/xxxpsyduck) for contributing the English documentation.
- Many thanks to [zhangxin](https://github.com/ZhangXinNan) for contributing the new visualization function, adding .gitignore, and removing the need to set PYTHONPATH manually.
- Many thanks to [lyl120117](https://github.com/lyl120117) for contributing the code for printing the network structure.
- Thanks [xiangyubo](https://github.com/xiangyubo) for contributing the handwritten Chinese OCR datasets.
- Thanks to [authorfu](https://github.com/authorfu) for contributing the Android demo and [xiadeye](https://github.com/xiadeye) for contributing the iOS demo.
- Thanks to [BeyondYourself](https://github.com/BeyondYourself) for contributing many great suggestions and simplifying part of the code style.