# TEXT DETECTION

This section uses the icdar2015 dataset as an example to introduce the training, evaluation, and testing of the detection model in PaddleOCR.

## DATA PREPARATION
The icdar2015 dataset can be downloaded from the [official website](https://rrc.cvc.uab.es/?ch=4&com=downloads). Registration is required before downloading.

Decompress the downloaded dataset into the working directory; here we assume it is extracted under PaddleOCR/train_data/. In addition, PaddleOCR consolidates the many scattered annotation files into two separate annotation files, one for training and one for testing, which can be downloaded with wget:
```shell
# Under the PaddleOCR path
cd PaddleOCR/
wget -P ./train_data/  https://paddleocr.bj.bcebos.com/dataset/train_icdar2015_label.txt
wget -P ./train_data/  https://paddleocr.bj.bcebos.com/dataset/test_icdar2015_label.txt
```

After decompressing the dataset and downloading the annotation files, PaddleOCR/train_data/ contains two folders and two files:
```
/PaddleOCR/train_data/icdar2015/text_localization/
  └─ icdar_c4_train_imgs/         Training data of icdar dataset
  └─ ch4_test_images/             Testing data of icdar dataset
  └─ train_icdar2015_label.txt    Training annotation of icdar dataset
  └─ test_icdar2015_label.txt     Test annotation of icdar dataset
```

The provided annotation file format is as follows, separated by "\t":
```
" Image file name             Image annotation information encoded by json.dumps"
ch4_test_images/img_61.jpg    [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]]}, {...}]
```
The image annotation after **json.dumps()** encoding is a list containing multiple dictionaries.

The `points` in the dictionary represent the coordinates (x, y) of the four points of the text box, arranged clockwise from the point at the upper left corner.

`transcription` represents the text of the current text box. **When its content is "###" it means that the text box is invalid and will be skipped during training.**

If you want to train PaddleOCR on other datasets, please build the annotation file according to the above format.
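As a quick check, here is a minimal Python sketch that reads such a label file and walks through its boxes (the file path follows the layout above; adjust it to your setup):

```python
import json

# Minimal sketch: iterate over a detection label file in the format above.
# Each line is "<image path>\t<json.dumps-encoded list of box dicts>".
label_file = "./train_data/train_icdar2015_label.txt"  # adjust to your setup

with open(label_file, encoding="utf-8") as f:
    for line in f:
        image_path, annotation = line.rstrip("\n").split("\t", 1)
        for box in json.loads(annotation):
            if box["transcription"] == "###":
                continue  # invalid box, skipped during training
            # box["points"]: four (x, y) corners, clockwise from top-left
            print(image_path, box["transcription"], box["points"])
```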


## TRAINING

First download the pretrained model. The detection model of PaddleOCR currently supports 3 backbones, namely MobileNetV3, ResNet18_vd and ResNet50_vd. You can replace the backbone with a model from [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/tree/develop/ppcls/modeling/architectures) according to your needs.
The corresponding download links for the backbone pretrained weights can be found in the [PaddleClas repo](https://github.com/PaddlePaddle/PaddleClas#mobile-series).
```shell
cd PaddleOCR/
# Download the pre-trained model of MobileNetV3
wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/MobileNetV3_large_x0_5_pretrained.pdparams
# or, download the pre-trained model of ResNet18_vd
wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet18_vd_pretrained.pdparams
# or, download the pre-trained model of ResNet50_vd
wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/dygraph/ResNet50_vd_ssld_pretrained.pdparams
```

#### START TRAINING
*If the CPU version is installed, please set the parameter `use_gpu` to `false` in the configuration file, or override it on the command line with `-o Global.use_gpu=false`.*
```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml
```

In the above command, `-c` selects the `configs/det/det_mv3_db.yml` configuration file for training.
For a detailed explanation of the configuration file, please refer to [config](./config_en.md).

You can also use `-o` to change training parameters without modifying the yml file. For example, to adjust the learning rate to 0.0001:
```shell
# single GPU training
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Optimizer.base_lr=0.0001

# multi-GPU training
# Set the GPU ID used by the '--gpus' parameter.
python3 -m paddle.distributed.launch --gpus '0,1,2,3'  tools/train.py -c configs/det/det_mv3_db.yml -o Optimizer.base_lr=0.0001
```
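The dotted keys passed to `-o` mirror the nesting of the yml file. The sketch below only illustrates the idea of such overrides; it is not PaddleOCR's actual implementation:

```python
# Illustrative sketch of how a dotted "Key.sub=value" override maps onto
# a nested config dict; not PaddleOCR's actual code.
def apply_override(config, override):
    key_path, value = override.split("=", 1)
    *parents, leaf = key_path.split(".")
    node = config
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value  # note: the value stays a string here

config = {"Optimizer": {"base_lr": 0.001}}
apply_override(config, "Optimizer.base_lr=0.0001")
print(config)  # {'Optimizer': {'base_lr': '0.0001'}}
```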

If you train with the PSE algorithm, you need to compile the post-processing code first:
```bash
cd ppocr/postprocess/pse_postprocess/pse
python3 setup.py build_ext --inplace
```

#### LOAD TRAINED MODEL AND CONTINUE TRAINING
If you want to load a trained model and continue training, you can specify the parameter `Global.checkpoints` as the path of the model to be loaded.

For example:
```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=./your/trained/model
```

**Note**: `Global.checkpoints` has higher priority than `Global.pretrain_weights`; that is, when both parameters are specified at the same time, the model specified by `Global.checkpoints` will be loaded first. If the model path specified by `Global.checkpoints` is wrong, the one specified by `Global.pretrain_weights` will be loaded instead.
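In pseudocode, the loading priority described above looks roughly like this (a sketch with hypothetical helper names, not PaddleOCR's actual loader):

```python
import os

# Rough sketch of the priority rule above; hypothetical helper, for illustration.
def resolve_weights(checkpoints=None, pretrain_weights=None):
    if checkpoints and os.path.exists(checkpoints + ".pdparams"):
        return checkpoints        # resume from the saved training state
    if pretrain_weights:
        return pretrain_weights   # fall back to pretrained weights
    return None                   # train from random initialization
```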


## EVALUATION

PaddleOCR calculates three indicators for evaluating the performance of the OCR detection task: Precision, Recall, and Hmean.
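Hmean is the harmonic mean of Precision and Recall. Given the number of matched, detected, and ground-truth boxes, the three indicators relate as in this small sketch:

```python
# Sketch of the three detection indicators from raw box counts.
def detection_metrics(num_matched, num_detected, num_gt):
    precision = num_matched / num_detected if num_detected else 0.0
    recall = num_matched / num_gt if num_gt else 0.0
    denom = precision + recall
    hmean = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, hmean

print(detection_metrics(90, 100, 120))  # (0.9, 0.75, 0.8181...)
```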

Run the following command to calculate the evaluation indicators. The results will be saved in the test result file specified by `save_res_path` in the configuration file `det_mv3_db.yml`.

When evaluating, set the post-processing parameters `box_thresh=0.6` and `unclip_ratio=1.5`. If you train on different datasets or with different models, these two parameters should be adjusted for better results.
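For intuition: in DB post-processing, `box_thresh` filters out low-score boxes, while `unclip_ratio` controls how far the predicted (shrunk) text region is expanded back to a full box. Below is a sketch of the expansion step, assuming the DB-style offset formula (distance = area × unclip_ratio / perimeter) and the `shapely`/`pyclipper` packages:

```python
import pyclipper
from shapely.geometry import Polygon

# Sketch of DB-style box expansion controlled by unclip_ratio.
# Assumes: offset distance = polygon area * unclip_ratio / perimeter.
def unclip(box, unclip_ratio=1.5):
    poly = Polygon(box)
    distance = poly.area * unclip_ratio / poly.length
    offset = pyclipper.PyclipperOffset()
    offset.AddPath(box, pyclipper.JT_ROUND, pyclipper.ET_CLOSEDPOLYGON)
    return offset.Execute(distance)  # list of expanded polygon(s)

print(unclip([(310, 104), (416, 141), (418, 216), (312, 179)]))
```

A larger `unclip_ratio` produces looser, larger boxes; a smaller one keeps the boxes tighter around the text.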

The model parameters during training are saved in the `Global.save_model_dir` directory by default. When evaluating indicators, you need to set `Global.checkpoints` to point to the saved parameter file.
```shell
python3 tools/eval.py -c configs/det/det_mv3_db.yml  -o Global.checkpoints="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```


* Note: `box_thresh` and `unclip_ratio` are parameters required for DB post-processing and do not need to be set when evaluating the EAST model.

## TEST

Test the detection result on a single image:
```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o Global.infer_img="./doc/imgs_en/img_10.jpg" Global.pretrained_model="./output/det_db/best_accuracy"
```

When testing the DB model, adjust the post-processing parameters:
```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o Global.infer_img="./doc/imgs_en/img_10.jpg" Global.pretrained_model="./output/det_db/best_accuracy"  PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```


Test the detection result on all images in the folder:
```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o Global.infer_img="./doc/imgs_en/" Global.pretrained_model="./output/det_db/best_accuracy"
```