wangsen / paddle_dbnet · Commit 8bdc050c (Unverified)

Merge branch 'PaddlePaddle:dygraph' into dygraph

Authored Oct 26, 2021 by Bin Lu; committed by GitHub on Oct 26, 2021.
Parents: 7da39b93, cc01a59b

78 files changed in total; showing 20 changed files with 313 additions and 70 deletions (+313 / -70).
Changed files:

- PTDN/docs/test_train_inference_python.md (+119, -0)
- PTDN/prepare.sh (+1, -1)
- PTDN/readme.md (+110, -0)
- PTDN/results/cpp_ppocr_det_mobile_results_fp16.txt (+0, -0)
- PTDN/results/cpp_ppocr_det_mobile_results_fp32.txt (+0, -0)
- PTDN/results/python_ppocr_det_mobile_results_fp16.txt (+0, -0)
- PTDN/results/python_ppocr_det_mobile_results_fp32.txt (+0, -0)
- PTDN/test_inference_cpp.sh (+5, -1)
- PTDN/test_serving.sh (+40, -36)
- PTDN/test_train_inference_python.sh (+38, -22)
- configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec.yml (+0, -1)
- configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_distillation.yml (+0, -1)
- configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_enhanced_ctc_loss.yml (+0, -1)
- configs/rec/ch_ppocr_v2.0/rec_chinese_common_train_v2.0.yml (+0, -1)
- configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml (+0, -1)
- configs/rec/multi_language/rec_arabic_lite_train.yml (+0, -1)
- configs/rec/multi_language/rec_cyrillic_lite_train.yml (+0, -1)
- configs/rec/multi_language/rec_devanagari_lite_train.yml (+0, -1)
- configs/rec/multi_language/rec_en_number_lite_train.yml (+0, -1)
- configs/rec/multi_language/rec_french_lite_train.yml (+0, -1)
tests/docs/test_python.md → PTDN/docs/test_train_inference_python.md
# Basic Training and Inference Functional Tests

The main entry point for the basic training and inference functional tests is `test_train_inference_python.sh`. It tests basic Python-based model training, evaluation, and inference, including pruning, quantization, and distillation.
## 1. Summary of Test Results

- Training:

| Algorithm | Model | Single machine, single GPU | Single machine, multi-GPU | Multi-machine, multi-GPU | Model compression (single machine, multi-GPU) |
| :---- | :---- | :---- | :---- | :---- | :---- |
| DB | ch_ppocr_mobile_v2.0_det | normal training <br> mixed precision | normal training <br> mixed precision | normal training <br> mixed precision | normal training: FPGM pruning, PACT quantization <br> offline quantization (no training needed) |
| DB | ch_ppocr_server_v2.0_det | normal training <br> mixed precision | normal training <br> mixed precision | normal training <br> mixed precision | normal training: FPGM pruning, PACT quantization <br> offline quantization (no training needed) |
| CRNN | ch_ppocr_mobile_v2.0_rec | normal training <br> mixed precision | normal training <br> mixed precision | normal training <br> mixed precision | normal training: PACT quantization <br> offline quantization (no training needed) |
| CRNN | ch_ppocr_server_v2.0_rec | normal training <br> mixed precision | normal training <br> mixed precision | normal training <br> mixed precision | normal training: PACT quantization <br> offline quantization (no training needed) |
| PP-OCR | ch_ppocr_mobile_v2.0 | normal training <br> mixed precision | normal training <br> mixed precision | normal training <br> mixed precision | - |
| PP-OCR | ch_ppocr_server_v2.0 | normal training <br> mixed precision | normal training <br> mixed precision | normal training <br> mixed precision | - |
| PP-OCRv2 | ch_PP-OCRv2 | normal training <br> mixed precision | normal training <br> mixed precision | normal training <br> mixed precision | - |
- Inference:

Depending on whether quantization is used during training, the trained models fall into two categories, `normal models` and `quantized models`. Their supported inference configurations are summarized below:

| Model type | device | batchsize | tensorrt | mkldnn | CPU multi-threading |
| ---- | ---- | ---- | :----: | :----: | :----: |
| normal model | GPU | 1/6 | fp32/fp16 | - | - |
| normal model | CPU | 1/6 | - | fp32 | supported |
| quantized model | GPU | 1/6 | int8 | - | - |
| quantized model | CPU | 1/6 | - | int8 | supported |
## 2. Test Workflow

### 2.1 Install Dependencies

- Install PaddlePaddle >= 2.0
- Install PaddleOCR dependencies

```
...
```
### 2.2 Functional Tests

First run `prepare.sh` to prepare the data and models, then run `test_train_inference_python.sh` for testing. Log files in the `python_infer_*.log` format are generated under the `PTDN/output` directory.

`test_train_inference_python.sh` has 5 run modes. Each mode runs on different data and is used to test either speed or accuracy:
- Mode 1, lite_train_infer: train with a small amount of data, used to quickly verify the end-to-end training-to-inference pipeline, without checking accuracy or speed;

```shell
bash PTDN/prepare.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'lite_train_infer'
bash PTDN/test_train_inference_python.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'lite_train_infer'
```
- Mode 2, whole_infer: train with a small amount of data and run inference on a moderate amount of data, used to verify that the trained model can run inference and that inference speed is reasonable;

```shell
bash PTDN/prepare.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'whole_infer'
bash PTDN/test_train_inference_python.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'whole_infer'
```
- Mode 3, infer: no training; run inference on the full dataset, exercising open-source model evaluation and dynamic-to-static conversion, and checking inference-model latency and accuracy;

```shell
bash PTDN/prepare.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'infer'
# Usage 1:
bash PTDN/test_train_inference_python.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'infer'
# Usage 2: run inference on a specific GPU; the third argument is the GPU id
bash PTDN/test_train_inference_python.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'infer' '1'
```
- Mode 4, whole_train_infer (CE): train on the full dataset and run inference on the full dataset, verifying training accuracy, inference accuracy, and inference speed;

```shell
bash PTDN/prepare.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'whole_train_infer'
bash PTDN/test_train_inference_python.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'whole_train_infer'
```
- Mode 5, klquant_infer: test offline (KL) quantization;

```shell
bash PTDN/prepare.sh ./PTDN/configs/ppocr_det_mobile_params.txt 'klquant_infer'
bash PTDN/test_train_inference_python.sh PTDN/configs/ppocr_det_mobile_params.txt 'klquant_infer'
```
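The five modes differ only in the mode string passed as the last argument to the two scripts. A small sketch that only prints the prepare/test command pair for every mode (nothing is executed against the repo; the params path is the one used above):

```shell
# Print, but do not run, the command pair for each of the five run modes.
PARAMS=./PTDN/configs/ppocr_det_mobile_params.txt
for mode in lite_train_infer whole_infer infer whole_train_infer klquant_infer; do
    echo "bash PTDN/prepare.sh ${PARAMS} '${mode}'"
    echo "bash PTDN/test_train_inference_python.sh ${PARAMS} '${mode}'"
done
```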
### 2.3 Accuracy Tests

The compare_results.py script checks whether the model's predictions match the expected results. The main steps are:

- extract the predicted coordinates from the logs;
- extract the previously saved coordinate results from local files;
- compare the two sets of results against the accuracy expectation; an error is raised when the difference exceeds the configured thresholds.
#### Usage

Run:

```shell
python3.7 PTDN/compare_results.py --gt_file=./PTDN/results/python_*.txt --log_file=./PTDN/output/python_infer_*.log --atol=1e-3 --rtol=1e-3
```
Parameters:

- gt_file: path to the previously saved ground-truth prediction results; supports *.txt endings and automatically globs *.txt files. By default the files are stored under the PTDN/results/ folder.
- log_file: path to the inference logs saved by the infer mode of the PTDN/test_train_inference_python.sh script. The logs contain the prediction results, e.g. text boxes, predicted text, categories, etc. Also accepts python_infer_*.log globs.
- atol: absolute tolerance
- rtol: relative tolerance
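In the usual numpy convention that `--atol`/`--rtol` flags follow, a value passes when |expected - actual| <= atol + rtol * |expected|. A minimal awk sketch of that rule for a single pair of numbers (`within_tol` is a made-up helper for illustration, not part of the repo):

```shell
# within_tol expected actual atol rtol -> exit 0 when within tolerance
within_tol() {
    awk -v e="$1" -v a="$2" -v at="$3" -v rt="$4" 'BEGIN {
        d = e - a; if (d < 0) d = -d;        # absolute difference
        lim = at + rt * (e < 0 ? -e : e);    # atol + rtol * |expected|
        exit !(d <= lim)
    }'
}
within_tol 1.000 1.0005 1e-3 1e-3 && echo "match" || echo "mismatch"  # prints: match
```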
#### Output

A successful run looks like this:

<img src="compare_right.png" width="1000">

Output when the results do not match:

<img src="compare_wrong.png" width="1000">
## 3. More Tutorials

This document covers functional testing only. For more detailed tutorials on training and inference, please refer to:

[Model training](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/training.md)
[Inference with the Python inference engine](https://github.com/PaddlePaddle/PaddleOCR/blob/dygraph/doc/doc_ch/inference.md)
tests/prepare.sh → PTDN/prepare.sh

@@ -134,5 +134,5 @@ if [ ${MODE} = "serving_infer" ];then
wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_rec_infer.tar
wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_det_infer.tar
wget -nc -P ./inference https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_server_v2.0_rec_infer.tar
cd ./inference && tar xf ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_det_infer.tar && cd ../
fi
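For context, `wget -nc -P ./inference` skips files that were already downloaded (-nc) and saves into the ./inference directory (-P), and the chained `tar xf ... && cd ../` unpacks each archive in place. A self-contained sketch of the unpack pattern (demo_inference/ and the file names are made up for illustration):

```shell
# Create a dummy "model" tarball, then unpack it the way prepare.sh does.
mkdir -p demo_inference
printf 'weights' > demo_inference/model.pdmodel
(cd demo_inference && tar cf ch_demo_infer.tar model.pdmodel && rm model.pdmodel)
(cd demo_inference && tar xf ch_demo_infer.tar && cd ../)
ls demo_inference
```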
tests/readme.md → PTDN/readme.md

# Inference and Deployment Guide

## 1. Introduction

Beyond basic model training and inference, PaddlePaddle provides high-performance inference and deployment tools for multiple devices and platforms. This document is the inference and deployment guide, PTDN (Paddle Train Deploy Navigation), for all models in PaddleOCR. It lets users look up the deployment support status of each model and run one-click tests.
<div align="center">
<img src="docs/guide.png" width="1000">
</div>

## 2. Summary

The support status is summarized below. A filled-in entry means the model can be tested with this tool in one click; an empty entry means support is in progress.
**Field descriptions:**

- Basic training and inference: model training plus Paddle Inference Python prediction.
- More training options: multi-machine multi-GPU and mixed precision.
- Model compression: pruning, offline/online quantization, and distillation.
- Other deployment options: Paddle Inference C++ prediction, Paddle Serving deployment, Paddle-Lite deployment, etc.

For the detailed support status of inference acceleration features such as mkldnn and TensorRT, see the [more tutorials](#more) of each test tool.

| Algorithm paper | Model | Model type | Basic <br> training & inference | More <br> training options | Model compression | Other deployment |
| :--- | :--- | :----: | :--------: | :---- | :---- | :---- |
| DB | ch_ppocr_mobile_v2.0_det | detection | supported | multi-machine multi-GPU <br> mixed precision | FPGM pruning <br> offline quantization | Paddle Inference: C++ <br> Paddle Serving: Python, C++ <br> Paddle-Lite: <br> (1) ARM CPU (C++) |
| DB | ch_ppocr_server_v2.0_det | detection | supported | multi-machine multi-GPU <br> mixed precision | FPGM pruning <br> offline quantization | Paddle Inference: C++ <br> Paddle Serving: Python, C++ <br> Paddle-Lite: <br> (1) ARM CPU (C++) |
| DB | ch_PP-OCRv2_det | detection |
| CRNN | ch_ppocr_mobile_v2.0_rec | recognition | supported | multi-machine multi-GPU <br> mixed precision | PACT quantization <br> offline quantization | Paddle Inference: C++ <br> Paddle Serving: Python, C++ <br> Paddle-Lite: <br> (1) ARM CPU (C++) |
| CRNN | ch_ppocr_server_v2.0_rec | recognition | supported | multi-machine multi-GPU <br> mixed precision | PACT quantization <br> offline quantization | Paddle Inference: C++ <br> Paddle Serving: Python, C++ <br> Paddle-Lite: <br> (1) ARM CPU (C++) |
| CRNN | ch_PP-OCRv2_rec | recognition |
| PP-OCR | ch_ppocr_mobile_v2.0 | detection+recognition | supported | multi-machine multi-GPU <br> mixed precision | - | Paddle Inference: C++ <br> Paddle Serving: Python, C++ <br> Paddle-Lite: <br> (1) ARM CPU (C++) |
| PP-OCR | ch_ppocr_server_v2.0 | detection+recognition | supported | multi-machine multi-GPU <br> mixed precision | - | Paddle Inference: C++ <br> Paddle Serving: Python, C++ <br> Paddle-Lite: <br> (1) ARM CPU (C++) |
| PP-OCRv2 | ch_PP-OCRv2 | detection+recognition |
| DB | det_mv3_db_v2.0 | detection |
| DB | det_r50_vd_db_v2.0 | detection |
| EAST | det_mv3_east_v2.0 | detection |
...
@@ -39,11 +54,11 @@

## 3. Using the One-Click Test Tool
### Directory Structure

```shell
PTDN/
├── configs/  # configuration files
│   ├── det_mv3_db.yml  # yml for training the mobile ppocr detection model under test
│   ├── det_r50_vd_db.yml  # yml for training the server ppocr detection model under test
│   ├── ...
├── ppocr_rec_server_params.txt  # parameter config file for testing the server ppocr recognition model
├── ...
├── results/  # pre-saved prediction results, compared against actual predictions for accuracy
│   ├── python_ppocr_det_mobile_results_fp32.txt  # pre-saved fp32 results of the mobile ppocr detection model (python inference)
│   ├── python_ppocr_det_mobile_results_fp16.txt  # pre-saved fp16 results of the mobile ppocr detection model (python inference)
│   ├── cpp_ppocr_det_mobile_results_fp32.txt  # pre-saved fp32 results of the mobile ppocr detection model (c++ inference)
│   ├── cpp_ppocr_det_mobile_results_fp16.txt  # pre-saved fp16 results of the mobile ppocr detection model (c++ inference)
│   ├── ...
├── prepare.sh  # downloads the data and models needed to run the test_*.sh scripts
├── test_train_inference_python.sh  # main program for testing python training and inference
├── test_inference_cpp.sh  # main program for testing c++ inference
├── test_serving.sh  # main program for testing serving deployment inference
├── test_lite.sh  # main program for testing lite deployment inference
├── compare_results.py  # checks whether the accuracy error between predictions in the logs and pre-saved results is within bounds
└── readme.md  # documentation
```
### Test Workflow

...
@@ -81,13 +96,15 @@ tests/

3. Use `compare_results.py` to compare the prediction results in the logs against the pre-saved results under the results directory, and check whether prediction accuracy is as expected (within tolerance).

There are 4 main test programs:

- `test_train_inference_python.sh`: tests basic Python-based model training, evaluation, and inference, including pruning, quantization, and distillation.
- `test_inference_cpp.sh`: tests C++-based model inference.
- `test_serving.sh`: tests service deployment based on Paddle Serving.
- `test_lite.sh`: tests on-device inference deployment based on Paddle-Lite.

<a name="more"></a>

#### More Tutorials

The functional tests cover training options such as mixed precision, pruning, and quantization, as well as inference options such as mkldnn and TensorRT. Click the links below for more details and tutorials:

[test_train_inference_python usage](docs/test_train_inference_python.md)
[test_inference_cpp usage](docs/test_inference_cpp.md)
[test_serving usage](docs/test_serving.md)
[test_lite usage](docs/test_lite.md)
tests/results/ppocr_det_mobile_results_fp16_cpp.txt → PTDN/results/cpp_ppocr_det_mobile_results_fp16.txt (file moved)
tests/results/ppocr_det_mobile_results_fp32_cpp.txt → PTDN/results/cpp_ppocr_det_mobile_results_fp32.txt (file moved)
tests/results/ppocr_det_mobile_results_fp16.txt → PTDN/results/python_ppocr_det_mobile_results_fp16.txt (file moved)
tests/results/ppocr_det_mobile_results_fp32.txt → PTDN/results/python_ppocr_det_mobile_results_fp32.txt (file moved)
tests/test_cpp.sh → PTDN/test_inference_cpp.sh

@@ -56,7 +56,11 @@ function func_cpp_inference(){
            fi
            for threads in ${cpp_cpu_threads_list[*]}; do
                for batch_size in ${cpp_batch_size_list[*]}; do
                    precision="fp32"
                    if [ ${use_mkldnn} = "False" ] && [ ${_flag_quant} = "True" ]; then
                        precision="int8"
                    fi
                    _save_log_path="${_log_path}/cpp_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
                    set_infer_data=$(func_set_params "${cpp_image_dir_key}" "${_img_dir}")
                    set_benchmark=$(func_set_params "${cpp_benchmark_key}" "${cpp_benchmark_value}")
                    set_batchsize=$(func_set_params "${cpp_batch_size_key}" "${batch_size}")
...
tests/test_serving.sh → PTDN/test_serving.sh

#!/bin/bash
source PTDN/common_func.sh

FILENAME=$1
dataline=$(awk 'NR==67, NR==83{print}' $FILENAME)

# parser params
IFS=$'\n'
lines=(${dataline})

# parser serving
model_name=$(func_parser_value "${lines[1]}")
python=$(func_parser_value "${lines[2]}")
trans_model_py=$(func_parser_value "${lines[3]}")
infer_model_dir_key=$(func_parser_key "${lines[4]}")
infer_model_dir_value=$(func_parser_value "${lines[4]}")
model_filename_key=$(func_parser_key "${lines[5]}")
model_filename_value=$(func_parser_value "${lines[5]}")
params_filename_key=$(func_parser_key "${lines[6]}")
params_filename_value=$(func_parser_value "${lines[6]}")
serving_server_key=$(func_parser_key "${lines[7]}")
serving_server_value=$(func_parser_value "${lines[7]}")
serving_client_key=$(func_parser_key "${lines[8]}")
serving_client_value=$(func_parser_value "${lines[8]}")
serving_dir_value=$(func_parser_value "${lines[9]}")
web_service_py=$(func_parser_value "${lines[10]}")
web_use_gpu_key=$(func_parser_key "${lines[11]}")
web_use_gpu_list=$(func_parser_value "${lines[11]}")
web_use_mkldnn_key=$(func_parser_key "${lines[12]}")
web_use_mkldnn_list=$(func_parser_value "${lines[12]}")
web_cpu_threads_key=$(func_parser_key "${lines[13]}")
web_cpu_threads_list=$(func_parser_value "${lines[13]}")
web_use_trt_key=$(func_parser_key "${lines[14]}")
web_use_trt_list=$(func_parser_value "${lines[14]}")
web_precision_key=$(func_parser_key "${lines[15]}")
web_precision_list=$(func_parser_value "${lines[15]}")
pipeline_py=$(func_parser_value "${lines[16]}")

LOG_PATH="../../PTDN/output"
mkdir -p ./PTDN/output
status_log="${LOG_PATH}/results_serving.log"

function func_serving(){
    IFS='|'
    _python=$1
...
@@ -65,12 +65,12 @@ function func_serving(){
            continue
        fi
        for threads in ${web_cpu_threads_list[*]}; do
            _save_log_path="${LOG_PATH}/server_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_batchsize_1.log"
            set_cpu_threads=$(func_set_params "${web_cpu_threads_key}" "${threads}")
            web_service_cmd="${python} ${web_service_py} ${web_use_gpu_key}=${use_gpu} ${web_use_mkldnn_key}=${use_mkldnn} ${set_cpu_threads} &"
            eval $web_service_cmd
            sleep 2s
            pipeline_cmd="${python} ${pipeline_py} > ${_save_log_path} 2>&1 "
            eval $pipeline_cmd
            last_status=${PIPESTATUS[0]}
            eval "cat ${_save_log_path}"
...
@@ -93,13 +93,13 @@ function func_serving(){
            if [[ ${use_trt} = "False" || ${precision} =~ "int8" ]] && [[ ${_flag_quant} = "True" ]]; then
                continue
            fi
            _save_log_path="${LOG_PATH}/server_infer_gpu_usetrt_${use_trt}_precision_${precision}_batchsize_1.log"
            set_tensorrt=$(func_set_params "${web_use_trt_key}" "${use_trt}")
            set_precision=$(func_set_params "${web_precision_key}" "${precision}")
            web_service_cmd="${python} ${web_service_py} ${web_use_gpu_key}=${use_gpu} ${set_tensorrt} ${set_precision} & "
            eval $web_service_cmd
            sleep 2s
            pipeline_cmd="${python} ${pipeline_py} > ${_save_log_path} 2>&1 "
            eval $pipeline_cmd
            last_status=${PIPESTATUS[0]}
            eval "cat ${_save_log_path}"
...
@@ -129,3 +129,7 @@ eval $env
echo "################### run test ###################"
export Count=0
IFS="|"
func_serving "${web_service_cmd}"
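The script leans on `func_parser_key` and `func_parser_value` from common_func.sh to split each params-file line into a key and a value. A hedged sketch of what such parsers do, assuming `key:value` lines (the real helpers in the repo may differ in detail):

```shell
# Split "key:value" config lines: the key is everything before the
# first ':', the value is everything after it.
func_parser_key() {
    echo "$1" | cut -d ":" -f 1
}
func_parser_value() {
    echo "$1" | cut -d ":" -f 2-
}
func_parser_key "use_gpu:True|False"     # prints: use_gpu
func_parser_value "use_gpu:True|False"   # prints: True|False
```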
tests/test_python.sh → PTDN/test_train_inference_python.sh
@@ -5,11 +5,7 @@ FILENAME=$1
# MODE be one of ['lite_train_infer' 'whole_infer' 'whole_train_infer', 'infer', 'klquant_infer']
MODE=$2
dataline=$(awk 'NR==1, NR==51{print}' $FILENAME)

# parser params
IFS=$'\n'
@@ -93,6 +89,8 @@ infer_value1=$(func_parser_value "${lines[50]}")
# parser klquant_infer
if [ ${MODE} = "klquant_infer" ]; then
    dataline=$(awk 'NR==82, NR==98{print}' $FILENAME)
    lines=(${dataline})
    # parser inference model
    infer_model_dir_list=$(func_parser_value "${lines[1]}")
    infer_export_list=$(func_parser_value "${lines[2]}")
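The `awk 'NR==82, NR==98{print}'` calls above rely on awk's range pattern: NR is the current record (line) number, and `NR==M, NR==N` selects lines M through N inclusive, which is how the scripts slice a mode-specific section out of the params file. A tiny demonstration:

```shell
# Slice lines 2..4 out of a 5-line stream.
printf 'l1\nl2\nl3\nl4\nl5\n' | awk 'NR==2, NR==4 {print}'
```

Because the ranges are fixed, each section of the params file must stay at its expected line numbers.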
...
@@ -143,18 +141,28 @@ function func_inference(){
                fi
                for threads in ${cpu_threads_list[*]}; do
                    for batch_size in ${batch_size_list[*]}; do
                        for precision in ${precision_list[*]}; do
                            if [ ${use_mkldnn} = "False" ] && [ ${precision} = "fp16" ]; then
                                continue
                            fi # skip when enable fp16 but disable mkldnn
                            if [ ${_flag_quant} = "True" ] && [ ${precision} != "int8" ]; then
                                continue
                            fi # skip when quant model inference but precision is not int8
                            set_precision=$(func_set_params "${precision_key}" "${precision}")
                            _save_log_path="${_log_path}/python_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
                            set_infer_data=$(func_set_params "${image_dir_key}" "${_img_dir}")
                            set_benchmark=$(func_set_params "${benchmark_key}" "${benchmark_value}")
                            set_batchsize=$(func_set_params "${batch_size_key}" "${batch_size}")
                            set_cpu_threads=$(func_set_params "${cpu_threads_key}" "${threads}")
                            set_model_dir=$(func_set_params "${infer_model_key}" "${_model_dir}")
                            set_infer_params1=$(func_set_params "${infer_key1}" "${infer_value1}")
                            command="${_python} ${_script} ${use_gpu_key}=${use_gpu} ${use_mkldnn_key}=${use_mkldnn} ${set_cpu_threads} ${set_model_dir} ${set_batchsize} ${set_infer_data} ${set_benchmark} ${set_precision} ${set_infer_params1} > ${_save_log_path} 2>&1 "
                            eval $command
                            last_status=${PIPESTATUS[0]}
                            eval "cat ${_save_log_path}"
                            status_check $last_status "${command}" "${status_log}"
                        done
                    done
                done
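Each inference run writes to a log whose name encodes every loop variable, which is what lets `compare_results.py` glob `python_infer_*.log` later. A sketch of the name assembly with illustrative values:

```shell
# Assemble a log file name from loop variables (values are examples).
use_mkldnn=True; threads=4; precision=fp32; batch_size=1
_log_path=./PTDN/output
_save_log_path="${_log_path}/python_infer_cpu_usemkldnn_${use_mkldnn}_threads_${threads}_precision_${precision}_batchsize_${batch_size}.log"
echo "${_save_log_path}"
# prints: ./PTDN/output/python_infer_cpu_usemkldnn_True_threads_4_precision_fp32_batchsize_1.log
```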
...
@@ -224,6 +232,9 @@ if [ ${MODE} = "infer" ] || [ ${MODE} = "klquant_infer" ]; then
        fi
        #run inference
        is_quant=${infer_quant_flag[Count]}
        if [ ${MODE} = "klquant_infer" ]; then
            is_quant="True"
        fi
        func_inference "${python}" "${inference_py}" "${save_infer_dir}" "${LOG_PATH}" "${infer_img_dir}" ${is_quant}
        Count=$(($Count + 1))
    done
...
@@ -234,6 +245,7 @@ else
    for gpu in ${gpu_list[*]}; do
        use_gpu=${USE_GPU_KEY[Count]}
        Count=$(($Count + 1))
        ips=""
        if [ ${gpu} = "-1" ]; then
            env=""
        elif [ ${#gpu} -le 1 ]; then
...
@@ -253,6 +265,11 @@ else
            env=" "
        fi
        for autocast in ${autocast_list[*]}; do
            if [ ${autocast} = "amp" ]; then
                set_amp_config="Global.use_amp=True Global.scale_loss=1024.0 Global.use_dynamic_loss_scaling=True"
            else
                set_amp_config=" "
            fi
            for trainer in ${trainer_list[*]}; do
                flag_quant=False
                if [ ${trainer} = ${pact_key} ]; then
...
@@ -279,7 +296,6 @@ else
...
@@ -279,7 +296,6 @@ else
            if [ ${run_train} = "null" ]; then
                continue
            fi
            set_autocast=$(func_set_params "${autocast_key}" "${autocast}")
            set_epoch=$(func_set_params "${epoch_key}" "${epoch_num}")
            set_pretrain=$(func_set_params "${pretrain_model_key}" "${pretrain_model_value}")
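Each `set_*` variable above is produced by `func_set_params`, whose definition is elided from this diff. A plausible reconstruction, based only on how it is called here (this is an illustrative sketch, not the script's actual definition): emit `key=value` unless the key or value is null/empty, so unset options simply vanish from the command line.

```shell
#!/usr/bin/env bash
# Plausible reconstruction of func_set_params (the real definition lives
# elsewhere in test_train_inference_python.sh): emit "key=value" unless
# the key or value is null/empty.
func_set_params() {
    local key=$1
    local value=$2
    if [ "${key}" = "null" ]; then
        echo " "
    elif [ "${value}" = "null" ] || [ "${value}" = " " ] || [ ${#value} -le 0 ]; then
        echo " "
    else
        echo "${key}=${value}"
    fi
}

set_epoch=$(func_set_params "Global.epoch_num" "2")               # -> "Global.epoch_num=2"
set_pretrain=$(func_set_params "Global.pretrained_model" "null")  # -> blank placeholder
```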
...
@@ -295,11 +311,11 @@ else
            set_save_model=$(func_set_params "${save_model_key}" "${save_log}")
            if [ ${#gpu} -le 2 ]; then  # train with cpu or single gpu
-               cmd="${python} ${run_train} ${set_use_gpu} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1}"
+               cmd="${python} ${run_train} ${set_use_gpu} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1} ${set_amp_config}"
-           elif [ ${#gpu} -le 15 ]; then  # train with multi-gpu
+           elif [ ${#ips} -le 26 ]; then  # train with multi-gpu
-               cmd="${python} -m paddle.distributed.launch --gpus=${gpu} ${run_train} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1}"
+               cmd="${python} -m paddle.distributed.launch --gpus=${gpu} ${run_train} ${set_use_gpu} ${set_save_model} ${set_epoch} ${set_pretrain} ${set_autocast} ${set_batchsize} ${set_train_params1} ${set_amp_config}"
            else  # train with multi-machine
-               cmd="${python} -m paddle.distributed.launch --ips=${ips} --gpus=${gpu} ${run_train} ${set_save_model} ${set_pretrain} ${set_epoch} ${set_autocast} ${set_batchsize} ${set_train_params1}"
+               cmd="${python} -m paddle.distributed.launch --ips=${ips} --gpus=${gpu} ${set_use_gpu} ${run_train} ${set_save_model} ${set_pretrain} ${set_epoch} ${set_autocast} ${set_batchsize} ${set_train_params1} ${set_amp_config}"
            fi
            # run train
            eval "unset CUDA_VISIBLE_DEVICES"
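The launcher in the hunk above is chosen purely from string lengths: a `gpu` value of at most 2 characters (`-1` for cpu, or a single card id like `0`) runs plain Python, a longer card list like `0,1,2,3` goes through `paddle.distributed.launch --gpus=...`, and a long `ips` list switches to multi-machine launch with `--ips`. A sketch of just that branch logic (the function name and return strings are illustrative):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the length-based launcher selection.
select_launcher() {
    local gpu=$1
    local ips=$2
    if [ ${#gpu} -le 2 ]; then
        echo "single"         # cpu ("-1") or one gpu id ("0")
    elif [ ${#ips} -le 26 ]; then
        echo "multi-gpu"      # several cards, one machine
    else
        echo "multi-machine"  # --ips plus --gpus
    fi
}
```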
...
configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec.yml
@@ -14,7 +14,6 @@ Global:
   use_visualdl: false
   infer_img: doc/imgs_words/ch/word_1.jpg
   character_dict_path: ppocr/utils/ppocr_keys_v1.txt
-  character_type: ch
   max_text_length: 25
   infer_mode: false
   use_space_char: true
...
configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_distillation.yml
@@ -14,7 +14,6 @@ Global:
   use_visualdl: false
   infer_img: doc/imgs_words/ch/word_1.jpg
   character_dict_path: ppocr/utils/ppocr_keys_v1.txt
-  character_type: ch
   max_text_length: 25
   infer_mode: false
   use_space_char: true
...
configs/rec/ch_PP-OCRv2/ch_PP-OCRv2_rec_enhanced_ctc_loss.yml
@@ -14,7 +14,6 @@ Global:
   use_visualdl: false
   infer_img: doc/imgs_words/ch/word_1.jpg
   character_dict_path: ppocr/utils/ppocr_keys_v1.txt
-  character_type: ch
   max_text_length: 25
   infer_mode: false
   use_space_char: true
...
configs/rec/ch_ppocr_v2.0/rec_chinese_common_train_v2.0.yml
@@ -15,7 +15,6 @@ Global:
   infer_img: doc/imgs_words/ch/word_1.jpg
   # for data or label process
   character_dict_path: ppocr/utils/ppocr_keys_v1.txt
-  character_type: ch
   max_text_length: 25
   infer_mode: False
   use_space_char: True
...
configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml
@@ -15,7 +15,6 @@ Global:
   infer_img: doc/imgs_words/ch/word_1.jpg
   # for data or label process
   character_dict_path: ppocr/utils/ppocr_keys_v1.txt
-  character_type: ch
   max_text_length: 25
   infer_mode: False
   use_space_char: True
...
configs/rec/multi_language/rec_arabic_lite_train.yml
@@ -15,7 +15,6 @@ Global:
   use_visualdl: false
   infer_img: null
   character_dict_path: ppocr/utils/dict/arabic_dict.txt
-  character_type: arabic
   max_text_length: 25
   infer_mode: false
   use_space_char: true
...
configs/rec/multi_language/rec_cyrillic_lite_train.yml
@@ -15,7 +15,6 @@ Global:
   use_visualdl: false
   infer_img: null
   character_dict_path: ppocr/utils/dict/cyrillic_dict.txt
-  character_type: cyrillic
   max_text_length: 25
   infer_mode: false
   use_space_char: true
...
configs/rec/multi_language/rec_devanagari_lite_train.yml
@@ -15,7 +15,6 @@ Global:
   use_visualdl: false
   infer_img: null
   character_dict_path: ppocr/utils/dict/devanagari_dict.txt
-  character_type: devanagari
   max_text_length: 25
   infer_mode: false
   use_space_char: true
...
configs/rec/multi_language/rec_en_number_lite_train.yml
@@ -16,7 +16,6 @@ Global:
   infer_img:
   # for data or label process
   character_dict_path: ppocr/utils/en_dict.txt
-  character_type: EN
   max_text_length: 25
   infer_mode: False
   use_space_char: True
...
configs/rec/multi_language/rec_french_lite_train.yml
@@ -16,7 +16,6 @@ Global:
   infer_img:
   # for data or label process
   character_dict_path: ppocr/utils/dict/french_dict.txt
-  character_type: french
   max_text_length: 25
   infer_mode: False
   use_space_char: False
...