Commit 8b077370 authored by LDOUBLEV

Merge branch 'dygraph' of https://github.com/PaddlePaddle/PaddleOCR into dygraph

parents fea08acc 86090243
@@ -94,14 +94,14 @@ The current open source models, data sets and magnitudes are as follows:
- Chinese dataset: the LSVT street-view dataset is cropped according to the ground-truth annotations and position-calibrated, yielding 300k images in total. In addition, 5 million images are synthesized based on the LSVT corpus.
- Minority-language datasets: 1 million synthetic images are generated for each language using different corpora and fonts, with ICDAR-MLT used as the validation set.

The public datasets above are all open source; users can search for and download them by themselves, or refer to [Chinese dataset](./datasets_en.md). The synthetic data is not open source, but users can generate their own with open-source synthesis tools such as [text_renderer](https://github.com/Sanster/text_renderer), [SynthText](https://github.com/ankush-me/SynthText) and [TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator).
<a name="22-vertical-scene"></a>
### 3.2 Vertical Scene

PaddleOCR mainly focuses on general OCR. For vertical-domain requirements, you can train your own models with PaddleOCR plus vertical-domain data. If labeled data is lacking, or you do not want to invest in research and development, it is recommended to directly call an open API, which covers some of the more common vertical categories.

<a name="23-build-your-own-data-set"></a>
@@ -147,8 +147,8 @@ There are several experiences for reference when constructing the data set:
***

Click the following links for detailed training tutorials:

- [text detection model training](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_ch/detection.md)
- [text recognition model training](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_ch/recognition.md)
- [text direction classification model training](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/doc/doc_ch/angle_class.md)
@@ -12,25 +12,25 @@ Here we have sorted out some Chinese OCR training and prediction tricks, which a
At present, the ResNet_vd series and MobileNetV3 series are the backbone networks used in PaddleOCR. Will replacing them with other backbone networks help improve accuracy, and what should be paid attention to when replacing them?

- **Tips**
- For both text detection and text recognition, the choice of backbone network is a trade-off between prediction quality and prediction efficiency. Generally, if a larger backbone network such as ResNet101_vd is selected, detection or recognition is more accurate, but the time cost increases accordingly; if a smaller backbone network such as MobileNetV3_small_x0_35 is selected, prediction is faster, but accuracy drops. Fortunately, the detection and recognition quality of different backbone networks is positively correlated with their performance on the ImageNet-1000 classification task. [**PaddleClas**](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.3/docs/en/models/models_intro_en.md) has sorted out 23 series of classification network structures, such as ResNet_vd, Res2Net, HRNet, MobileNetV3 and GhostNet. It provides the top-1 classification accuracy, the time cost on GPU (V100 and T4) and CPU (Snapdragon 855), and the [**download addresses**](https://paddleclas-en.readthedocs.io/en/latest/models/models_intro_en.html) of 117 pretrained models.
- Like ResNet with its four stages, a replacement backbone for text detection should expose four stages so that FPN-like detection heads can be integrated. In addition, for text detection, a model pretrained on ImageNet-1000 accelerates convergence and improves accuracy.
- To replace the backbone network for text recognition, pay attention to where the network downsamples in width and in height. Since text images in Chinese recognition have a large width-to-height ratio, the height (only 32 pixels) is downsampled frequently, while the width is downsampled less often so that enough horizontal resolution is kept for the character sequence. You can refer to the [modifications of MobileNetV3](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/ppocr/modeling/backbones/rec_mobilenet_v3.py) in PaddleOCR; a toy sketch follows this list.
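For intuition, the toy sketch below (a hypothetical example, not PaddleOCR's actual backbone) shows the pattern described above: later stages downsample with stride (2, 1), so the feature map keeps shrinking vertically while the horizontal resolution needed for long text lines is preserved:

```python
import paddle
import paddle.nn as nn

class TinyRecBackbone(nn.Layer):
    """Toy recognition backbone: height is halved at every stage,
    while width is only halved in the first two stages."""

    def __init__(self, in_channels=3):
        super().__init__()
        self.stages = nn.Sequential(
            nn.Conv2D(in_channels, 32, 3, stride=2, padding=1),   # H/2,  W/2
            nn.Conv2D(32, 64, 3, stride=2, padding=1),            # H/4,  W/4
            nn.Conv2D(64, 128, 3, stride=(2, 1), padding=1),      # H/8,  W/4
            nn.Conv2D(128, 256, 3, stride=(2, 1), padding=1),     # H/16, W/4
        )

    def forward(self, x):
        return self.stages(x)

# A [3, 32, 320] input keeps width 80 while the height shrinks to 2,
# leaving enough horizontal resolution for the character sequence.
x = paddle.randn([1, 3, 32, 320])
print(TinyRecBackbone()(x).shape)  # [1, 256, 2, 80]
```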
<a name="LongChineseTextRecognition"></a>
#### 2. Long Chinese Text Recognition

- **Problem Description**

The maximum input resolution of the Chinese recognition model during training is [3, 32, 320]. If the text image to be recognized is especially long, as shown in the figure below, how should it be handled?
<div align="center">
<img src="../tricks/long_text_examples.jpg" width="600">
</div>
- **Tips**

During training, samples are not directly resized to [3, 32, 320]. First, the height is resized to 32 while keeping the aspect ratio; when the resulting width is less than 320, the remaining part is padded with zeros. Samples whose width-to-height ratio is larger than 10 are ignored. When predicting a single image, the same procedure is applied, but the maximum width-to-height ratio is not limited. When predicting a batch of images, the procedure follows training, except that the target width is the longest width among the images in the batch. [Code as follows](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/tools/infer/predict_rec.py):
```
def resize_norm_img(self, img, max_wh_ratio):
    imgC, imgH, imgW = self.rec_image_shape
    ...
```
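The linked function is truncated in this diff. As a rough, self-contained sketch of the procedure described above (an approximation of, not a verbatim copy of, `tools/infer/predict_rec.py`):

```python
import math
import cv2
import numpy as np

def resize_norm_img_sketch(img, max_wh_ratio, img_c=3, img_h=32, img_w=320):
    """Resize to a fixed height, keep the aspect ratio, zero-pad the width."""
    # Widen the target width for long images (batch-level maximum ratio).
    img_w = int(img_h * max_wh_ratio)
    h, w = img.shape[:2]
    ratio = w / float(h)
    # Width after resizing to height img_h, capped at the target width.
    if math.ceil(img_h * ratio) > img_w:
        resized_w = img_w
    else:
        resized_w = int(math.ceil(img_h * ratio))
    resized = cv2.resize(img, (resized_w, img_h)).astype('float32')
    # Normalize to [-1, 1] and switch to CHW layout.
    resized = resized.transpose((2, 0, 1)) / 255.0
    resized = (resized - 0.5) / 0.5
    # Zero-pad on the right up to the target width.
    padded = np.zeros((img_c, img_h, img_w), dtype='float32')
    padded[:, :, :resized_w] = resized
    return padded
```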
@@ -58,11 +58,11 @@ Here we have sorted out some Chinese OCR training and prediction tricks, which a
- **Problem Description**

As shown in the figure below, in mixed Chinese-and-English scenes, recognizing the spaces between words is often necessary to make the recognition results easy to read and use. How can this case be handled?
<div align="center">
<img src="../imgs_results/chinese_db_crnn_server/en_paper.jpg" width="600">
</div>
- **Tips**

There are two possible approaches to space recognition. (1) Optimize text detection: to split text at spaces in the detection results, a text line containing spaces must be divided into several segments when labeling the detection data. (2) Optimize text recognition: introduce the space character into the recognition dictionary and label the spaces in the recognition training data; in addition, multiple word lines can be concatenated to synthesize training data containing spaces. PaddleOCR currently uses the second approach.
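As a hypothetical illustration of the second approach (PaddleOCR exposes this behavior through its `use_space_char` option; the helper below is illustrative rather than the library's code): the space simply becomes one more entry in the character dictionary, so the model can emit it like any other class.

```python
def build_char_list(dict_path, use_space_char=True):
    """Load a recognition dictionary; optionally append the space character."""
    with open(dict_path, 'rb') as f:
        chars = [line.decode('utf-8').strip('\n').strip('\r\n') for line in f]
    if use_space_char:
        chars.append(' ')  # the space becomes an ordinary output class
    return chars
```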
doc/joinus.PNG (binary image updated: 199 KB → 198 KB)
@@ -799,7 +799,7 @@ class VQATokenLabelEncode(object):
                 ocr_engine=None,
                 **kwargs):
        super(VQATokenLabelEncode, self).__init__()
        from paddlenlp.transformers import LayoutXLMTokenizer, LayoutLMTokenizer, LayoutLMv2Tokenizer
        from ppocr.utils.utility import load_vqa_bio_label_maps
        tokenizer_dict = {
            'LayoutXLM': {
@@ -809,6 +809,10 @@ class VQATokenLabelEncode(object):
            'LayoutLM': {
                'class': LayoutLMTokenizer,
                'pretrained_model': 'layoutlm-base-uncased'
            },
            'LayoutLMv2': {
                'class': LayoutLMv2Tokenizer,
                'pretrained_model': 'layoutlmv2-base-uncased'
            }
        }
        self.contains_re = contains_re
...
@@ -12,6 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from collections import defaultdict


class VQASerTokenChunk(object):
    def __init__(self, max_seq_len=512, infer_mode=False, **kwargs):
@@ -39,6 +41,8 @@ class VQASerTokenChunk(object):
                encoded_inputs_example[key] = data[key]
            encoded_inputs_all.append(encoded_inputs_example)
        if len(encoded_inputs_all) == 0:
            return None
        return encoded_inputs_all[0]
@@ -101,17 +105,18 @@ class VQAReTokenChunk(object):
"entities": self.reformat(entities_in_this_span), "entities": self.reformat(entities_in_this_span),
"relations": self.reformat(relations_in_this_span), "relations": self.reformat(relations_in_this_span),
}) })
item['entities']['label'] = [ if len(item['entities']) > 0:
self.entities_labels[x] for x in item['entities']['label'] item['entities']['label'] = [
] self.entities_labels[x] for x in item['entities']['label']
encoded_inputs_all.append(item) ]
encoded_inputs_all.append(item)
if len(encoded_inputs_all) == 0:
return None
return encoded_inputs_all[0] return encoded_inputs_all[0]
    def reformat(self, data):
        new_data = defaultdict(list)
        for item in data:
            for k, v in item.items():
                new_data[k].append(v)
        return new_data
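For context, `reformat` merges a list of dicts into a dict of lists, and `defaultdict(list)` removes the explicit key-existence check deleted above. A standalone illustration with made-up values:

```python
from collections import defaultdict

items = [{'label': 'B-QUESTION', 'id': 1}, {'label': 'I-QUESTION', 'id': 2}]
merged = defaultdict(list)
for item in items:
    for k, v in item.items():
        merged[k].append(v)  # missing keys start as [] automatically
print(dict(merged))  # {'label': ['B-QUESTION', 'I-QUESTION'], 'id': [1, 2]}
```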
@@ -45,8 +45,11 @@ def build_backbone(config, model_type):
        from .table_mobilenet_v3 import MobileNetV3
        support_dict = ["ResNet", "MobileNetV3"]
    elif model_type == 'vqa':
        from .vqa_layoutlm import LayoutLMForSer, LayoutLMv2ForSer, LayoutLMv2ForRe, LayoutXLMForSer, LayoutXLMForRe
        support_dict = [
            "LayoutLMForSer", "LayoutLMv2ForSer", 'LayoutLMv2ForRe',
            "LayoutXLMForSer", 'LayoutXLMForRe'
        ]
    else:
        raise NotImplementedError
...
@@ -21,12 +21,14 @@ from paddle import nn
from paddlenlp.transformers import LayoutXLMModel, LayoutXLMForTokenClassification, LayoutXLMForRelationExtraction
from paddlenlp.transformers import LayoutLMModel, LayoutLMForTokenClassification
from paddlenlp.transformers import LayoutLMv2Model, LayoutLMv2ForTokenClassification, LayoutLMv2ForRelationExtraction

__all__ = ["LayoutXLMForSer", 'LayoutLMForSer']

pretrained_model_dict = {
    LayoutXLMModel: 'layoutxlm-base-uncased',
    LayoutLMModel: 'layoutlm-base-uncased',
    LayoutLMv2Model: 'layoutlmv2-base-uncased'
}
@@ -58,12 +60,34 @@ class NLPBaseModel(nn.Layer):
        self.out_channels = 1


class LayoutLMForSer(NLPBaseModel):
    def __init__(self, num_classes, pretrained=True, checkpoints=None,
                 **kwargs):
        super(LayoutLMForSer, self).__init__(
            LayoutLMModel,
            LayoutLMForTokenClassification,
            'ser',
            pretrained,
            checkpoints,
            num_classes=num_classes)

    def forward(self, x):
        x = self.model(
            input_ids=x[0],
            bbox=x[2],
            attention_mask=x[4],
            token_type_ids=x[5],
            position_ids=None,
            output_hidden_states=False)
        return x


class LayoutLMv2ForSer(NLPBaseModel):
    def __init__(self, num_classes, pretrained=True, checkpoints=None,
                 **kwargs):
        super(LayoutLMv2ForSer, self).__init__(
            LayoutLMv2Model,
            LayoutLMv2ForTokenClassification,
            'ser',
            pretrained,
            checkpoints,
@@ -82,12 +106,12 @@ class LayoutXLMForSer(NLPBaseModel):
        return x[0]


class LayoutXLMForSer(NLPBaseModel):
    def __init__(self, num_classes, pretrained=True, checkpoints=None,
                 **kwargs):
        super(LayoutXLMForSer, self).__init__(
            LayoutXLMModel,
            LayoutXLMForTokenClassification,
            'ser',
            pretrained,
            checkpoints,
@@ -97,10 +121,33 @@ class LayoutLMForSer(NLPBaseModel):
        x = self.model(
            input_ids=x[0],
            bbox=x[2],
            image=x[3],
            attention_mask=x[4],
            token_type_ids=x[5],
            position_ids=None,
            head_mask=None,
            labels=None)
        return x[0]


class LayoutLMv2ForRe(NLPBaseModel):
    def __init__(self, pretrained=True, checkpoints=None, **kwargs):
        super(LayoutLMv2ForRe, self).__init__(LayoutLMv2Model,
                                              LayoutLMv2ForRelationExtraction,
                                              're', pretrained, checkpoints)

    def forward(self, x):
        x = self.model(
            input_ids=x[0],
            bbox=x[1],
            labels=None,
            image=x[2],
            attention_mask=x[3],
            token_type_ids=x[4],
            position_ids=None,
            head_mask=None,
            entities=x[5],
            relations=x[6])
        return x
...
@@ -25,11 +25,8 @@ __all__ = ['build_optimizer']
def build_lr_scheduler(lr_config, epochs, step_each_epoch):
    from . import learning_rate
    lr_config.update({'epochs': epochs, 'step_each_epoch': step_each_epoch})
    lr_name = lr_config.pop('name', 'Const')
    lr = getattr(learning_rate, lr_name)(**lr_config)()
    return lr
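With this change, a config that omits `name` resolves to the new `Const` scheduler (added below) instead of returning the raw float. Hypothetical configs illustrating both paths:

```python
# 'name' selects a class from ppocr's learning_rate module (illustrative values).
explicit_cfg = {'name': 'Cosine', 'learning_rate': 0.001}  # -> learning_rate.Cosine
implicit_cfg = {'learning_rate': 0.001}                    # -> learning_rate.Const (default)
```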
...
@@ -275,4 +275,36 @@ class OneCycle(object):
            start_lr=0.0,
            end_lr=self.max_lr,
            last_epoch=self.last_epoch)
        return learning_rate


class Const(object):
    """
    Const learning rate decay
    Args:
        learning_rate(float): initial learning rate
        step_each_epoch(int): steps each epoch
        last_epoch (int, optional): The index of last epoch. Can be set to restart training. Default: -1, means initial learning rate.
    """

    def __init__(self,
                 learning_rate,
                 step_each_epoch,
                 warmup_epoch=0,
                 last_epoch=-1,
                 **kwargs):
        super(Const, self).__init__()
        self.learning_rate = learning_rate
        self.last_epoch = last_epoch
        self.warmup_epoch = round(warmup_epoch * step_each_epoch)

    def __call__(self):
        learning_rate = self.learning_rate
        if self.warmup_epoch > 0:
            learning_rate = lr.LinearWarmup(
                learning_rate=learning_rate,
                warmup_steps=self.warmup_epoch,
                start_lr=0.0,
                end_lr=self.learning_rate,
                last_epoch=self.last_epoch)
        return learning_rate
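A usage sketch for `Const` (assuming `paddle.optimizer.lr` is imported as `lr`, as the class body above implies):

```python
import paddle.optimizer.lr as lr  # assumed module alias

const = Const(learning_rate=0.001, step_each_epoch=100)
print(const())  # 0.001: a plain float, i.e. a constant learning rate

warm = Const(learning_rate=0.001, step_each_epoch=100, warmup_epoch=2)
scheduler = warm()  # lr.LinearWarmup ramping 0.0 -> 0.001 over 200 steps
```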
English | [简体中文](README_ch.md)
- [1. Introduction](#1-introduction)
- [2. Update log](#2-update-log)
- [3. Features](#3-features)
- [4. Results](#4-results)
  - [4.1 Layout analysis and table recognition](#41-layout-analysis-and-table-recognition)
  - [4.2 DOC-VQA](#42-doc-vqa)
- [5. Quick start](#5-quick-start)
- [6. PP-Structure System](#6-pp-structure-system)
  - [6.1 Layout analysis and table recognition](#61-layout-analysis-and-table-recognition)
    - [6.1.1 Layout analysis](#611-layout-analysis)
    - [6.1.2 Table recognition](#612-table-recognition)
  - [6.2 DOC-VQA](#62-doc-vqa)
- [7. Model List](#7-model-list)
  - [7.1 Layout analysis model](#71-layout-analysis-model)
  - [7.2 OCR and table recognition model](#72-ocr-and-table-recognition-model)
  - [7.3 DOC-VQA model](#73-doc-vqa-model)
## 1. Introduction

PP-Structure is an OCR toolkit that can be used for document analysis and processing of complex structures, designed to help developers better complete document understanding tasks.
## 2. Update log

* 2022.02.12 DOC-VQA: add LayoutLMv2 model.
* 2021.12.07 add [DOC-VQA SER and RE tasks](vqa/README.md)

## 3. Features

The main features of PP-Structure are as follows:
@@ -36,26 +36,19 @@ The main features of PP-Structure are as follows:
- Support custom training for layout analysis and table structure tasks
- Support Document Visual Question Answering (DOC-VQA) tasks: Semantic Entity Recognition (SER) and Relation Extraction (RE)

## 4. Results

### 4.1 Layout analysis and table recognition

<img src="../doc/table/ppstructure.GIF" width="100%"/>
The figure shows the pipeline of layout analysis + table recognition. The image is first divided into four types of areas (image, text, title and table) by layout analysis. OCR detection and recognition is then performed on the image, text and title areas, table recognition is performed on the table areas, and the image areas are also saved for later use.
### 4.2 DOC-VQA

* SER

![](../doc/vqa/result_ser/zh_val_0_ser.jpg) | ![](../doc/vqa/result_ser/zh_val_42_ser.jpg)
---|---
Different colored boxes in the figure represent different categories. For the XFUN dataset, there are three categories: query, answer and header.
@@ -69,25 +62,18 @@ The corresponding category and OCR recognition results are also marked at the to
* RE

![](../doc/vqa/result_re/zh_val_21_re.jpg) | ![](../doc/vqa/result_re/zh_val_40_re.jpg)
---|---
In the figure, the red box represents the question, the blue box represents the answer, and the question and answer are connected by green lines. The corresponding category and OCR recognition results are also marked at the top left of the OCR detection box.

## 5. Quick start

Start from [Quick Installation](./docs/quickstart.md)

## 6. PP-Structure System

### 6.1 Layout analysis and table recognition

![pipeline](../doc/table/pipeline.jpg)
@@ -96,45 +82,39 @@ In PP-Structure, the image will be divided into 5 types of areas **text, title,
#### 6.1.1 Layout analysis

Layout analysis classifies images by region, covering the use of the layout analysis tool's Python scripts, extraction of detection boxes of designated categories, performance metrics, and custom training of layout analysis models. For details, please refer to the [document](layout/README.md).

#### 6.1.2 Table recognition

Table recognition converts table images into Excel documents, including the detection and recognition of table text and the prediction of table structure and cell coordinates. For detailed instructions, please refer to the [document](table/README.md).
### 6.2 DOC-VQA

Document Visual Question Answering (DOC-VQA) is a type of Visual Question Answering (VQA), which includes Semantic Entity Recognition (SER) and Relation Extraction (RE) tasks. Based on the SER task, text recognition and classification in images can be completed; based on the RE task, relations between text contents in the image can be extracted, such as judging question-answer pairs. For details, please refer to the [document](vqa/README.md).
## 7. Model List

PP-Structure Series Model List (Updating)

### 7.1 Layout analysis model

|model name|description|download|
| --- | --- | --- |
| ppyolov2_r50vd_dcn_365e_publaynet | The layout analysis model trained on the PubLayNet dataset can divide an image into 5 types of areas: **text, title, table, picture, and list** | [PubLayNet](https://paddle-model-ecology.bj.bcebos.com/model/layout-parser/ppyolov2_r50vd_dcn_365e_publaynet.tar) |
### 7.2 OCR and table recognition model

|model name|description|model size|download|
| --- | --- | --- | --- |
|ch_PP-OCRv2_det_slim|[New] Slim quantization with distillation lightweight model, supporting Chinese, English, multilingual text detection| 3M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)|
|ch_PP-OCRv2_rec_slim|[New] Slim quantization with distillation lightweight model, supporting Chinese, English, multilingual text recognition| 9M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_train.tar) |
|en_ppocr_mobile_v2.0_table_structure|Table structure prediction for English table scenes, trained on the PubLayNet dataset| 18.6M |[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) |
### 7.3 DOC-VQA model

|model name|description|model size|download|
| --- | --- | --- | --- |
|ser_LayoutXLM_xfun_zh|SER model trained on the XFUN Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) |
|re_LayoutXLM_xfun_zh|RE model trained on the XFUN Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) |

If you need other models, you can download them from the [PPOCR model_list](../doc/doc_en/models_list_en.md) and the [PPStructure model_list](./docs/models_list.md)
[English](README.md) | 简体中文
- [1. Introduction](#1-简介)
- [2. Recent updates](#2-近期更新)
- [3. Features](#3-特性)
- [4. Results](#4-效果展示)
  - [4.1 Layout analysis and table recognition](#41-版面分析和表格识别)
  - [4.2 DOC-VQA](#42-doc-vqa)
- [5. Quick start](#5-快速体验)
- [6. Introduction to PP-Structure](#6-pp-structure-介绍)
  - [6.1 Layout analysis + table recognition](#61-版面分析表格识别)
    - [6.1.1 Layout analysis](#611-版面分析)
    - [6.1.2 Table recognition](#612-表格识别)
  - [6.2 DOC-VQA](#62-doc-vqa)
- [7. Model zoo](#7-模型库)
  - [7.1 Layout analysis models](#71-版面分析模型)
  - [7.2 OCR and table recognition models](#72-ocr和表格识别模型)
  - [7.3 DOC-VQA models](#73-doc-vqa-模型)
## 1. Introduction

PP-Structure is an OCR toolkit for analyzing and processing documents with complex structures, designed to help developers better complete document understanding tasks.

## 2. Recent updates

* 2022.02.12 Added the LayoutLMv2 model for DOC-VQA.
* 2021.12.07 Added [DOC-VQA tasks SER and RE](vqa/README.md).

## 3. Features
@@ -34,27 +35,19 @@ The main features of PP-Structure are as follows:
- Supports custom training for both layout analysis and table structure tasks
- Supports Document Visual Question Answering (DOC-VQA) tasks: Semantic Entity Recognition (SER) and Relation Extraction (RE)

## 4. Results

### 4.1 Layout analysis and table recognition

<img src="../doc/table/ppstructure.GIF" width="100%"/>
The figure shows the overall pipeline of layout analysis + table recognition. The image is first divided into four kinds of regions (image, text, title and table) by layout analysis; OCR detection and recognition is then performed on the image, text and title regions, table recognition is performed on the tables, and the image regions are also saved for later use.

### 4.2 DOC-VQA

* SER

![](../doc/vqa/result_ser/zh_val_0_ser.jpg) | ![](../doc/vqa/result_ser/zh_val_42_ser.jpg)
---|---

Different colored boxes in the figure represent different categories. For the XFUN dataset, there are three categories: `QUESTION`, `ANSWER`, `HEADER`.
@@ -67,24 +60,18 @@ The main features of PP-Structure are as follows:
* RE

![](../doc/vqa/result_re/zh_val_21_re.jpg) | ![](../doc/vqa/result_re/zh_val_40_re.jpg)
---|---

In the figure, red boxes represent questions, blue boxes represent answers, and questions and answers are connected by green lines. The corresponding category and OCR recognition result are also marked at the top left of each OCR detection box.
## 5. Quick start

Please refer to the [Quick Installation](./docs/quickstart.md) tutorial.

## 6. Introduction to PP-Structure

### 6.1 Layout analysis + table recognition

![pipeline](../doc/table/pipeline.jpg)
@@ -99,39 +86,34 @@ The main features of PP-Structure are as follows:
Table recognition converts table images into Excel documents, including the detection and recognition of table text and the prediction of table structure and cell coordinates. For details, refer to the [document](table/README_ch.md).

### 6.2 DOC-VQA

DOC-VQA refers to Document Visual Question Answering, which includes Semantic Entity Recognition (SER) and Relation Extraction (RE) tasks. Based on the SER task, text in images can be recognized and classified; based on the RE task, relations between text contents in the image can be extracted, such as judging question-answer pairs. For details, refer to the [document](vqa/README.md).

## 7. Model zoo

PP-Structure series model list (being updated)
### 7.1 Layout analysis models

|model name|description|download|
| --- | --- | --- |
| ppyolov2_r50vd_dcn_365e_publaynet | Layout analysis model trained on the PubLayNet dataset; it can divide an image into 5 kinds of regions: **text, title, table, picture and list** | [PubLayNet](https://paddle-model-ecology.bj.bcebos.com/model/layout-parser/ppyolov2_r50vd_dcn_365e_publaynet.tar) |
### 7.2 OCR and table recognition models

|model name|description|model size|download|
| --- | --- | --- | --- |
|ch_PP-OCRv2_det_slim|[Latest] Slim quantization + distillation ultra-lightweight model, supporting Chinese, English and multilingual text detection| 3M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_slim_quant_infer.tar)|
|ch_PP-OCRv2_rec_slim|[Latest] Slim quantization ultra-lightweight model, supporting Chinese, English and digit recognition| 9M |[inference model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_slim_quant_train.tar) |
|en_ppocr_mobile_v2.0_table_structure|Table structure prediction for English table scenes, trained on the PubLayNet dataset|18.6M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) |
### 7.3 DOC-VQA models

|model name|description|model size|download|
| --- | --- | --- | --- |
|ser_LayoutXLM_xfun_zh|SER model trained on the XFUN Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) |
|re_LayoutXLM_xfun_zh|RE model trained on the XFUN Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) |

For more model downloads, refer to the [PP-OCR model_list](../doc/doc_ch/models_list.md) and the [PP-Structure model_list](./docs/models_list.md).
- [Quick installation](#快速安装)
- [1. PaddlePaddle and PaddleOCR](#1-paddlepaddle-和-paddleocr)
- [2. Install other dependencies](#2-安装其他依赖)
  - [2.1 Layout-Parser, required for layout analysis](#21-版面分析所需--layout-parser)
  - [2.2 Dependencies required for VQA](#22--vqa所需依赖)

# Quick installation

## 1. PaddlePaddle and PaddleOCR
...
- [Key Information Extraction](#关键信息提取key-information-extraction)
- [1. Quick start](#1-快速使用)
- [2. Training](#2-执行训练)
- [3. Evaluation](#3-执行评估)
- [4. References](#4-参考文献)

# Key Information Extraction
@@ -7,11 +11,6 @@
SDMGR is a key information extraction algorithm that classifies each detected text region into predefined categories, such as order ID, invoice number and amount.

## 1. Quick start

The wildreceipt dataset is used for training and testing; download it with the following command:
@@ -36,7 +35,6 @@ python3.7 tools/infer_kie.py -c configs/kie/kie_unet_sdmgr.yml -o Global.checkpo
<img src="./imgs/0.png" width="800">
</div>

## 2. Training

Create a soft link to the dataset under the PaddleOCR/train_data directory:
@@ -50,7 +48,6 @@ ln -s ../../wildreceipt ./
```
python3.7 tools/train.py -c configs/kie/kie_unet_sdmgr.yml -o Global.save_model_dir=./output/kie/
```

## 3. Evaluation
```
python3.7 tools/eval.py -c configs/kie/kie_unet_sdmgr.yml -o Global.checkpoints=./output/kie/best_accuracy
```

## 4. References

<!-- [ALGORITHM] -->
...
- [Key Information Extraction(KIE)](#key-information-extractionkie)
- [1. Quick Use](#1-quick-use)
- [2. Model Training](#2-model-training)
- [3. Model Evaluation](#3-model-evaluation)
- [4. Reference](#4-reference)
# Key Information Extraction(KIE)
@@ -6,13 +10,6 @@ This section provides a tutorial example on how to quickly use, train, and evalu
[SDMGR (Spatial Dual-Modality Graph Reasoning)](https://arxiv.org/abs/2103.14470) is a KIE algorithm that classifies each detected text region into predefined categories, such as order ID, invoice number, amount, etc.

## 1. Quick Use

The [Wildreceipt dataset](https://paperswithcode.com/dataset/wildreceipt) is used for this tutorial. It contains 1765 photos, 25 classes, and 50000 text boxes, and can be downloaded with wget:
@@ -37,7 +34,6 @@ The visualization results are shown in the figure below:
<img src="./imgs/0.png" width="800">
</div>

## 2. Model Training

Create a softlink to the folder, `PaddleOCR/train_data`:
@@ -51,7 +47,6 @@ The configuration file used for training is `configs/kie/kie_unet_sdmgr.yml`. Th
```shell
python3.7 tools/train.py -c configs/kie/kie_unet_sdmgr.yml -o Global.save_model_dir=./output/kie/
```

## 3. Model Evaluation
@@ -61,7 +56,7 @@ After training, you can execute the model evaluation with the following command:
```
python3.7 tools/eval.py -c configs/kie/kie_unet_sdmgr.yml -o Global.checkpoints=./output/kie/best_accuracy
```

## 4. Reference

<!-- [ALGORITHM] -->
...
- [PP-Structure series model list](#pp-structure-系列模型列表)
- [1. LayoutParser models](#1-layoutparser-模型)
- [2. OCR and table recognition models](#2-ocr和表格识别模型)
  - [2.1 OCR](#21-ocr)
  - [2.2 Table recognition models](#22-表格识别模型)
- [3. VQA models](#3-vqa模型)
- [4. KIE models](#4-kie模型)

# PP-Structure series model list

## 1. LayoutParser models
@@ -10,25 +19,33 @@
## 2. OCR and table recognition models

### 2.1 OCR

|model name|description|inference model size|download|
| --- | --- | --- | --- |
|en_ppocr_mobile_v2.0_table_det|Text detection for English table scenes, trained on the PubLayNet dataset|4.7M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_det_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_det_train.tar) |
|en_ppocr_mobile_v2.0_table_rec|Text recognition for English table scenes, trained on the PubLayNet dataset|6.9M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_rec_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_rec_train.tar) |

To use other OCR models, download them from the [PP-OCR model_list](../../doc/doc_ch/models_list.md) or use your own trained models, and configure them in the `det_model_dir` and `rec_model_dir` fields.

### 2.2 Table recognition models

|model name|description|inference model size|download|
| --- | --- | --- | --- |
|en_ppocr_mobile_v2.0_table_structure|Table structure prediction for English table scenes, trained on the PubLayNet dataset|18.6M|[inference model](https://paddleocr.bj.bcebos.com/dygraph_v2.0/table/en_ppocr_mobile_v2.0_table_structure_infer.tar) / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/table/en_ppocr_mobile_v2.0_table_structure_train.tar) |
## 3. VQA models

|model name|description|inference model size|download|
| --- | --- | --- | --- |
|ser_LayoutXLM_xfun_zh|SER model trained on the XFUN Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutXLM_xfun_zh.tar) |
|re_LayoutXLM_xfun_zh|RE model trained on the XFUN Chinese dataset based on LayoutXLM|1.4G|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutXLM_xfun_zh.tar) |
|ser_LayoutLMv2_xfun_zh|SER model trained on the XFUN Chinese dataset based on LayoutLMv2|778M|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLMv2_xfun_zh.tar) |
|re_LayoutLMv2_xfun_zh|RE model trained on the XFUN Chinese dataset based on LayoutLMv2|765M|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/re_LayoutLMv2_xfun_zh.tar) |
|ser_LayoutLM_xfun_zh|SER model trained on the XFUN Chinese dataset based on LayoutLM|430M|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/pplayout/ser_LayoutLM_xfun_zh.tar) |
## 4. KIE models

|model name|description|model size|download|
| --- | --- | --- | --- |
|SDMGR|Key information extraction model|78M|[inference model coming soon]() / [trained model](https://paddleocr.bj.bcebos.com/dygraph_v2.1/kie/kie_vgg16.tar)|
# PP-Structure quick start

- [PP-Structure quick start](#pp-structure-快速开始)
- [1. Install dependencies](#1-安装依赖包)
- [2. Quick use](#2-便捷使用)
  - [2.1 Use by command line](#21-命令行使用)
  - [2.2 Use by Python script](#22-python脚本使用)
  - [2.3 Returned results](#23-返回结果说明)
  - [2.4 Parameter description](#24-参数说明)
- [3. Use by Python script](#3-python脚本使用)

## 1. Install dependencies
@@ -24,12 +22,8 @@ pip3 install -e .
```

## 2. Quick use

### 2.1 Use by command line

* Layout analysis + table recognition
@@ -41,8 +35,6 @@ paddleocr --image_dir=../doc/table/1.png --type=structure
Please refer to: [Document Visual Question Answering](../vqa/README.md)

### 2.2 Use by Python script

* Layout analysis + table recognition
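The script body is collapsed in this diff; the sketch below shows a typical PP-Structure call for this release (the image path is a placeholder):

```python
import os
import cv2
from paddleocr import PPStructure, save_structure_res

table_engine = PPStructure(show_log=True)

img_path = '../doc/table/1.png'  # placeholder input image
img = cv2.imread(img_path)
result = table_engine(img)

# Each detected table is exported as an Excel file under the save folder.
save_structure_res(result, './output/table',
                   os.path.basename(img_path).split('.')[0])

for region in result:
    region.pop('img')  # drop the raw image crop before printing
    print(region)
```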
@@ -76,8 +68,6 @@ im_show.save('result.jpg')
Please refer to: [Document Visual Question Answering](../vqa/README.md)

### 2.3 Returned results

The result returned by PP-Structure is a list of dicts, for example:
@@ -103,8 +93,6 @@ The fields of the dict are described as follows
Please refer to: [Document Visual Question Answering](../vqa/README.md)

### 2.4 Parameter description

| Field | Description | Default |
@@ -122,8 +110,6 @@ The fields of the dict are described as follows
After running, each image has a directory with the same name under the directory specified by the `output` field. Each table in the image is stored as an Excel file, and picture regions are cropped and saved; the Excel files and pictures are named after the coordinates of the table in the image.

## 3. Use by Python script

* Layout analysis + table recognition
...
English | [简体中文](README_ch.md)
- [Getting Started](#getting-started)
- [1. Install whl package](#1--install-whl-package)
- [2. Quick Start](#2-quick-start)
- [3. PostProcess](#3-postprocess)
- [4. Results](#4-results)
- [5. Training](#5-training)
# Getting Started
## 1. Install whl package

```bash
wget https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl
pip install -U layoutparser-0.0.0-py3-none-any.whl
```

## 2. Quick Start

Use LayoutParser to identify the layout of a document:
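The example code is collapsed in this diff; a minimal sketch of the call pattern (the config path and label map follow the PubLayNet model referenced in this document, but treat the exact values as assumptions):

```python
import cv2
import layoutparser as lp

image = cv2.imread('doc/table/1.png')  # placeholder path
image = image[..., ::-1]               # BGR -> RGB

model = lp.PaddleDetectionLayoutModel(
    config_path='lp://PubLayNet/ppyolov2_r50vd_dcn_365e_publaynet/config',
    threshold=0.5,
    label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
    enforce_cpu=False)
layout = model.detect(image)

show_img = lp.draw_box(image, layout, box_width=3, show_element_type=True)
show_img.show()
```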
@@ -77,8 +68,6 @@ The following model configurations and label maps are currently supported, which
* TableBank word and TableBank latex are trained on datasets of word documents and latex documents respectively;
* The downloaded TableBank dataset contains both word and latex.

## 3. PostProcess

Layout parser contains multiple categories. If you only want to get the detection boxes of a specific category (such as the "Text" category), you can use the following code:
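The snippet itself is collapsed in this diff; a sketch of the filtering idea, where `layout` is the detection result from the previous section:

```python
# Keep only blocks predicted as "Text".
text_blocks = lp.Layout([b for b in layout if b.type == 'Text'])

# Optionally drop text blocks nested inside figure regions.
figure_blocks = lp.Layout([b for b in layout if b.type == 'Figure'])
text_blocks = lp.Layout([b for b in text_blocks
                         if not any(b.is_in(fig) for fig in figure_blocks)])

show_img = lp.draw_box(image, text_blocks, box_width=3, show_element_type=True)
show_img.show()
```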
@@ -119,7 +108,6 @@ Displays results with only the "Text" category:
<div align="center">
<img src="../../doc/table/result_text.jpg" width = "600" />
</div>

## 4. Results
@@ -134,8 +122,6 @@ Displays results with only the "Text" category:
**GPU:** a single NVIDIA Tesla P40

## 5. Training

The above model is based on [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection). If you want to train your own layout parser model, please refer to: [train_layoutparser_model](train_layoutparser_model.md)
[English](README.md) | 简体中文
- [Layout analysis usage guide](#版面分析使用说明)
- [1. Install the whl package](#1--安装whl包)
- [2. Usage](#2-使用)
- [3. Post-processing](#3-后处理)
- [4. Metrics](#4-指标)
- [5. Train a layout analysis model](#5-训练版面分析模型)

# Layout analysis usage guide
## 1. Install the whl package

```bash
pip install -U https://paddleocr.bj.bcebos.com/whl/layoutparser-0.0.0-py3-none-any.whl
```

## 2. Usage

Use layoutparser to identify the layout of a given document:
@@ -76,8 +68,6 @@ show_img.show()
* TableBank word and TableBank latex are trained on word-document and latex-document datasets respectively;
* The downloaded TableBank dataset contains both word and latex.

## 3. Post-processing

Layout analysis detection includes multiple categories. If you only want the detection boxes of a specified category (such as the "Text" category), you can use the following code:
@@ -119,8 +109,6 @@ show_img.show()
<img src="../../doc/table/result_text.jpg" width = "600" />
</div>

## 4. Metrics

| Dataset | mAP | CPU time cost | GPU time cost |
@@ -134,8 +122,6 @@ show_img.show()
**GPU:** a single NVIDIA Tesla P40

## 5. Train a layout analysis model

The above models are trained based on [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection). If you want to train your own layout analysis model, please refer to: [train_layoutparser_model](train_layoutparser_model_ch.md)
English | [简体中文](train_layoutparser_model_ch.md)

- [Training layout-parser](#training-layout-parser)
- [1. Installation](#1--installation)
  - [1.1 Requirements](#11-requirements)
  - [1.2 Install PaddleDetection](#12-install-paddledetection)
- [2. Data preparation](#2-data-preparation)
- [3. Configuration](#3-configuration)
- [4. Training](#4-training)
- [5. Prediction](#5-prediction)
- [6. Deployment](#6-deployment)
  - [6.1 Export model](#61-export-model)
  - [6.2 Inference](#62-inference)

# Training layout-parser
## 1. Installation
<a name="Requirements"></a>
### 1.1 Requirements

- PaddlePaddle 2.1
...@@ -35,8 +24,6 @@ ...
- CUDA >= 10.1
- cuDNN >= 7.6
<a name="Install_PaddleDetection"></a>
### 1.2 Install PaddleDetection

...@@ -51,8 +38,6 @@ pip install -r requirements.txt ...

For more installation tutorials, please refer to: [Install doc](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/INSTALL_cn.md)
<a name="Data_preparation"></a>
## 2. Data preparation

Download the [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) dataset:
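The download commands are elided here; a plausible sequence is sketched below, where the tarball URL is taken from the PubLayNet project page and should be verified before use:

```bash
# download and decompress PubLayNet (URL is an assumption; verify on the project page)
wget -O publaynet.tar.gz https://dax-cdn.cdn.appdomain.cloud/dax-publaynet/1.0.0/publaynet.tar.gz
tar -xvf publaynet.tar.gz
```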
...@@ -80,8 +65,6 @@ PubLayNet directory structure after decompressing: ...
For other datasets, please refer to [PrepareDataSet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/PrepareDataSet.md)
<a name="Configuration"></a>
## 3. Configuration

We use the `configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml` configuration for training; the configuration file is summarized as follows:
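The summary itself is elided here. For orientation, an abridged sketch of what the upstream config roughly looks like; the field values are assumptions and should be verified against your PaddleDetection checkout:

```yaml
# abridged sketch of configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml (verify locally)
_BASE_: [
  '../datasets/coco_detection.yml',
  '../runtime.yml',
  './_base_/ppyolov2_r50vd_dcn.yml',
  './_base_/optimizer_365e.yml',
  './_base_/ppyolov2_reader.yml',
]
snapshot_epoch: 8
weights: output/ppyolov2_r50vd_dcn_365e_coco/model_final
```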
...@@ -113,8 +96,6 @@ The `ppyolov2_r50vd_dcn_365e_coco.yml` configuration depends on other configurat ...
Modify the preceding files as needed, such as the dataset path and the batch size.
<a name="Training"></a>
## 4. Training

PaddleDetection provides single-GPU and multi-GPU training modes to meet users' various training needs:
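The commands are elided here; a sketch consistent with the fragment visible in the hunk below (GPU ids are illustrative):

```bash
# single-GPU training
export CUDA_VISIBLE_DEVICES=0
python tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --eval

# multi-GPU training on GPUs 0-3
python -m paddle.distributed.launch --gpus 0,1,2,3 \
    tools/train.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --eval
```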
...@@ -146,8 +127,6 @@ python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/ppy ...
Note: If you encounter an "`Out of memory error`", try reducing `batch_size` in the `ppyolov2_reader.yml` file.
## 5. Prediction

Set the parameters and use PaddleDetection to predict:
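The full command is elided here; a sketch, with illustrative image and weights paths:

```bash
# run prediction on a single image and draw boxes above the threshold
python tools/infer.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml \
    --infer_img=images/paper-image.jpg \
    --draw_threshold=0.5 \
    -o weights=output/ppyolov2_r50vd_dcn_365e_coco/model_final
```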
...@@ -159,14 +138,10 @@ python tools/infer.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --infer ...
`--draw_threshold` is an optional parameter. Because of the [NMS](https://ieeexplore.ieee.org/document/1699659) calculation, different thresholds produce different results. `keep_top_k` sets the maximum number of output targets; the default value is 100. You can set these values according to your actual situation.
<a name="Deployment"></a>
## 6. Deployment

Use your trained model in Layout Parser.
<a name="Export_model"></a>
### 6.1 Export model

In the process of model training, the saved model file contains both the forward-prediction and back-propagation processes. In actual industrial deployment, back propagation is not needed, so the model should be exported to the format required by the deployment. PaddleDetection provides the `tools/export_model.py` script to export the model.
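The export command is elided here; a sketch, assuming the final training weights live under `output/ppyolov2_r50vd_dcn_365e_coco/`:

```bash
# export the trained weights to an inference model under inference/
python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml \
    --output_dir=inference/ \
    -o weights=output/ppyolov2_r50vd_dcn_365e_coco/model_final
```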
...@@ -183,8 +158,6 @@ The prediction model is exported to `inference/ppyolov2_r50vd_dcn_365e_coco`, in ...
For more model export tutorials, please refer to: [EXPORT_MODEL](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md)
<a name="Inference"></a>
### 6.2 Inference

`model_path` specifies the trained model path; use layoutparser to predict:
```python
import layoutparser as lp

model = lp.PaddleDetectionLayoutModel(model_path="inference/ppyolov2_r50vd_dcn_365e_coco",
                                      threshold=0.5,
                                      label_map={0: "Text", 1: "Title", 2: "List", 3: "Table", 4: "Figure"},
                                      enforce_cpu=True,
                                      enable_mkldnn=True)
```
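The snippet above only loads the exported model; a short sketch of running it on an image (the input path is hypothetical):

```python
import cv2

image = cv2.imread("images/paper-image.jpg")[..., ::-1]  # hypothetical path; BGR -> RGB
layout = model.detect(image)  # returns an lp.Layout of detected blocks
for block in layout:
    print(block.type, block.coordinates)
```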
***
For more PaddleDetection training tutorials, please refer to: [PaddleDetection Training](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/GETTING_STARTED_cn.md)
[English](train_layoutparser_model.md) | 简体中文

- [Training a layout analysis model](#训练版面分析)
- [1. Installation](#1-安装)
  - [1.1 Requirements](#11-环境要求)
  - [1.2 Install PaddleDetection](#12-安装paddledetection)
- [2. Data preparation](#2-准备数据)
- [3. Configuration changes and notes](#3-配置文件改动和说明)
- [4. Training with PaddleDetection](#4-paddledetection训练)
- [5. Prediction with PaddleDetection](#5-paddledetection预测)
- [6. Deployment](#6-预测部署)
  - [6.1 Export model](#61-模型导出)
  - [6.2 Prediction with layout_parser](#62-layout_parser预测)

# Training a Layout Analysis Model
## 1. Installation
<a name="环境要求"></a>
### 1.1 Requirements

- PaddlePaddle 2.1
...@@ -35,8 +24,6 @@ ...
- CUDA >= 10.1
- cuDNN >= 7.6
<a name="安装PaddleDetection"></a>
### 1.2 Install PaddleDetection

...@@ -51,8 +38,6 @@ pip install -r requirements.txt ...

For more installation tutorials, please refer to: [Install doc](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/INSTALL_cn.md)
<a name="数据准备"></a>
## 2. Data preparation

Download the [PubLayNet](https://github.com/ibm-aur-nlp/PubLayNet) dataset:
...@@ -80,8 +65,6 @@ tar -xvf publaynet.tar.gz ...
If you use other datasets, please refer to [PrepareDataSet](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/docs/tutorials/PrepareDataSet.md)
<a name="配置文件改动和说明"></a>
## 3. Configuration changes and notes

We use the `configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml` configuration for training; a summary of the configuration file is as follows:
...@@ -113,8 +96,6 @@ weights: output/ppyolov2_r50vd_dcn_365e_coco/model_final ...
Modify the above files according to your actual situation, such as the dataset path and the batch size.
<a name="训练"></a>
## 4. Training with PaddleDetection

PaddleDetection provides single-GPU and multi-GPU training modes to meet users' various training needs:
...@@ -146,8 +127,6 @@ python -m paddle.distributed.launch --gpus 0,1,2,3 tools/train.py -c configs/ppy ...
Note: If you encounter an "`Out of memory error`", try reducing `batch_size` in the `ppyolov2_reader.yml` file.
<a name="预测"></a>
## 5. Prediction with PaddleDetection

Set the parameters and use PaddleDetection to predict:
...@@ -159,14 +138,10 @@ python tools/infer.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml --infer ...
`--draw_threshold` is an optional parameter. Because of the [NMS](https://ieeexplore.ieee.org/document/1699659) calculation, different thresholds produce different results. `keep_top_k` sets the maximum number of output targets; the default value is 100. You can set it according to your actual situation.
<a name="预测部署"></a>
## 6. Deployment

Use your own trained model in layout parser.
<a name="模型导出"></a>
### 6.1 Export model

The model file saved during training contains both the forward-prediction and back-propagation processes. In actual industrial deployment, back propagation is not needed, so the model needs to be exported into the format required by the deployment. PaddleDetection provides the `tools/export_model.py` script to export the model.
...@@ -183,8 +158,6 @@ python tools/export_model.py -c configs/ppyolo/ppyolov2_r50vd_dcn_365e_coco.yml ...
For more model export tutorials, please refer to: [EXPORT_MODEL](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.1/deploy/EXPORT_MODEL.md)
<a name="layout parser预测"></a>
### 6.2 Prediction with layout_parser

`model_path` specifies the trained model path; use layout parser to predict: