# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from src.utils.args import ArgumentGroup
# yapf: disable
parser = argparse.ArgumentParser(__doc__)

model_g = ArgumentGroup(parser, "model", "model configuration and paths.")
model_g.add_arg("ernie_config_path", str, None, "Path to the json file for ernie model config.")
model_g.add_arg("init_checkpoint", str, None, "Init checkpoint to resume training from.")
model_g.add_arg("init_pretraining_params", str, None, "Init pre-training params to perform fine-tuning from. Ignored if 'init_checkpoint' has been set.")
model_g.add_arg("checkpoints", str, "checkpoints", "Path to save checkpoints.")

log_g = ArgumentGroup(parser, "logging", "logging related.")
log_g.add_arg("skip_steps", int, 10, "The steps interval to print loss.")
log_g.add_arg("verbose", bool, False, "Whether to output verbose log.")
data_g = ArgumentGroup(parser, "data", "Data paths, vocab paths and data processing options.")
data_g.add_arg("tokenizer", str, "FullTokenizer", "ATTENTION: the input must be split into words separated by blanks when using SentencepieceTokenizer or WordsegTokenizer.")
data_g.add_arg("train_set", str, None, "Path to training data.")
data_g.add_arg("test_set", str, None, "Path to test data.")
data_g.add_arg("dev_set", str, None, "Path to validation data.")
data_g.add_arg("max_seq_len", int, 512, "Maximum sequence length in words.")
data_g.add_arg("q_max_seq_len", int, 32, "Maximum sequence length of the query in words.")
data_g.add_arg("p_max_seq_len", int, 256, "Maximum sequence length of the passage in words.")
data_g.add_arg("train_data_size", int, 0, "Total number of training examples. Set for distributed training.")
data_g.add_arg("batch_size", int, 32, "Number of examples in a training batch. See also --in_tokens.")
data_g.add_arg("predict_batch_size", int, None, "Number of examples in a prediction batch. See also --in_tokens.")
data_g.add_arg("in_tokens", bool, False, "If set, the batch size will be the maximum number of tokens in one batch. Otherwise, it will be the maximum number of examples in one batch.")
data_g.add_arg("do_lower_case", bool, True, "Whether to lower case the input text. Should be True for uncased models and False for cased models.")
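`ArgumentGroup` above is imported from `src.utils.args`, which is not shown here. As a hedged sketch (the real helper may differ), it can be thought of as a thin wrapper over `argparse` argument groups that also coerces boolean flags passed as strings:

```python
import argparse

class ArgumentGroup:
    """Minimal sketch of the helper assumed to live in src.utils.args:
    wraps parser.add_argument_group and turns (name, type, default, help)
    into a standard --name option."""
    def __init__(self, parser, title, desc):
        self._group = parser.add_argument_group(title=title, description=desc)

    def add_arg(self, name, dtype, default, help_text):
        # Parse booleans from strings so `--verbose False` works on the CLI;
        # plain `type=bool` would treat any non-empty string as True.
        if dtype is bool:
            dtype = lambda s: str(s).lower() in ("true", "t", "1", "yes")
        self._group.add_argument("--" + name, type=dtype, default=default, help=help_text)

parser = argparse.ArgumentParser("demo")
data_g = ArgumentGroup(parser, "data", "data options")
data_g.add_arg("max_seq_len", int, 512, "Max sequence length.")
data_g.add_arg("in_tokens", bool, False, "Batch by tokens instead of examples.")

args = parser.parse_args(["--max_seq_len", "128", "--in_tokens", "True"])
print(args.max_seq_len, args.in_tokens)  # 128 True
```

Defaults apply when a flag is omitted, so calling `parser.parse_args([])` here would yield `max_seq_len=512` and `in_tokens=False`.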
This Information Extraction (IE) guide introduces our open-source industry-grade solution that covers the most widely-used application scenarios of Information Extraction. It features **multi-domain, multi-task, and cross-modal capabilities** and goes through the full lifecycle of **data labeling, model training and model deployment**. We hope this guide can help you apply Information Extraction techniques in your own products or models.
Information Extraction (IE) is the process of extracting structured information from given input data such as text, pictures or scanned documents. While IE brings immense value, applying IE techniques is never easy, with challenges such as domain adaptation, heterogeneous structures, and lack of labeled data. This PaddleNLP Information Extraction Guide builds on the foundation of our work in [Universal Information Extraction](https://arxiv.org/abs/2203.12277) and provides an industrial-level solution that not only supports **extracting entities, relations, events and opinions from plain text**, but also supports **cross-modal extraction from documents, tables and pictures**. Our method features a flexible prompt, which allows you to specify extraction targets in simple natural language. We also provide several domain-adapted models specialized for different industry sectors.
**Highlights:**
- **Comprehensive Coverage🎓:** Covers mainstream information extraction tasks for plain-text and document scenarios, and supports multiple languages
- **State-of-the-Art Performance🏃:** Strong performance of the UIE model series on plain-text and multimodal datasets. We also provide pretrained models of various sizes to meet different needs
- **Easy to Use⚡:** Three lines of code to use our `Taskflow` for out-of-the-box Information Extraction capabilities. One command line to start model training and deployment
- **Efficient Tuning✊:** Developers can easily get started with data labeling and model training without a background in Machine Learning
<a name="2"></a>
## 2. Features
<a name="21"></a>
### 2.1 Available Models
Multiple models are available, balancing accuracy and speed, to fit different information extraction scenarios.
| Model Name | Usage Scenarios | Supported Tasks |
| :--- | :--- | :--- |
| `uie-base`<br/>`uie-medium`<br/>`uie-mini`<br/>`uie-micro`<br/>`uie-nano` | An **extractive** model for **plain text** scenarios, supports **Chinese** | Supports entity, relation, event, opinion extraction |
| `uie-base-en` | An **extractive** model for **plain text** scenarios, supports **English** | Supports entity, relation, event, opinion extraction |
| `uie-m-base`<br/>`uie-m-large` | An **extractive** model for **plain text** scenarios, supports **Chinese and English** | Supports entity, relation, event, opinion extraction |
| <b>`uie-x-base`</b> | An **extractive** model for **plain text** and **document** scenarios, supports **Chinese and English** | Supports entity, relation, event, opinion extraction on plain text as well as documents/pictures/tables |
<a name="22"></a>
### 2.2 Performance
The UIE model series uses the ERNIE 3.0 lightweight models as the pre-trained language models and was fine-tuned on a large amount of information extraction data, so that the model adapts to a fixed prompt.
- Experimental results on Chinese dataset
We conducted experiments on in-house test sets from three domains: Internet, medical care, and finance:
0-shot means prediction directly through `paddlenlp.Taskflow` without any training data, and 5-shot means that each category contains 5 pieces of labeled data for model fine-tuning. **Experiments show that UIE can further improve performance with a small amount of labeled data (few-shot).**
- Experimental results on multimodal datasets
We evaluated the zero-shot performance of UIE-X on in-house multimodal test sets from three domains: general, financial, and medical:
The general test set contains complex samples from different fields and is the most difficult task.
<a name="23"></a>
### 2.3 Full Development Lifecycle
**Research stage**
- At this stage, the target requirements are open and there is no labeled data. We provide a simple way of using Taskflow out of the box: with three lines of code you can build a POC without any labeled data.
- [Text Extraction Taskflow User Guide](./taskflow_text_en.md)
- [Document Extraction Taskflow User Guide](./taskflow_doc_en.md)
**Data preparation stage**
- We recommend fine-tuning your own information extraction model for your use case. We provide Label Studio labeling solutions for different extraction scenarios, which seamlessly connect data labeling to training data construction and greatly reduce the time cost of data labeling and model customization.
- [Document Extraction and Labeling Guide](./label_studio_doc_en.md)
**Model fine-tuning and closed domain distillation**
- Leveraging UIE's few-shot capability enables low-cost model customization and adaptation. We also provide a closed-domain distillation solution to address slow extraction speed.
- [End-to-end example of text information extraction](./text/README_en.md)
- [End-to-end example of document information extraction](./document/README_en.md)
**Model Deployment**
- We provide an HTTP deployment solution to quickly deploy and serve customized models.
We recommend using the [Trainer API](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/docs/trainer.md) to fine-tune the model. Simply provide the model, dataset, etc., and the Trainer API lets you efficiently run pre-training, fine-tuning and model compression, with one-click support for multi-GPU training, mixed-precision training, gradient accumulation, checkpoint resumption and log display. The Trainer API also wraps common training configurations such as optimizers and learning rate scheduling.