wangsen / paddle_dbnet / Commits / 201cb592

Commit 201cb592, authored Nov 16, 2021 by LDOUBLEV

Merge branch 'dygraph' of https://github.com/PaddlePaddle/PaddleOCR into test_v11

Parents: 9415f71d, ccd01cfe
Changes: 77 files in this merge commit; this page shows 17 changed files with 268 additions and 117 deletions (+268 −117).
Files shown on this page:

    test_tipc/docs/lite_log.png                +0   −0
    test_tipc/docs/test_inference_cpp.md       +3   −3
    test_tipc/docs/test_lite_arm_cpu_cpp.md    +71  −0
    test_tipc/prepare.sh                       +4   −54
    test_tipc/prepare_lite.sh                  +55  −0
    test_tipc/readme.md                        +30  −13
    test_tipc/test_inference_cpp.sh            +25  −25
    test_tipc/test_lite_arm_cpu_cpp.sh         +60  −0
    tools/eval.py                              +2   −2
    tools/export_center.py                     +2   −2
    tools/export_model.py                      +2   −2
    tools/infer_cls.py                         +2   −2
    tools/infer_det.py                         +2   −2
    tools/infer_e2e.py                         +2   −2
    tools/infer_rec.py                         +3   −5
    tools/infer_table.py                       +3   −3
    tools/train.py                             +2   −2
test_tipc/docs/lite_log.png — View replaced file @ 9415f71d / View file @ 201cb592
Binary image replaced: 776 KB → 169 KB.
test_tipc/docs/test_inference_cpp.md — View file @ 201cb592

@@ -20,12 +20,12 @@ The main program of the C++ inference function test is `test_inference_cpp.sh`, which can test inference based on ...

Run `prepare.sh` first to prepare the data and models, then run `test_inference_cpp.sh` for the test; log files with the `cpp_infer_*.log` suffix are generated under the `test_tipc/output` directory.

```diff
-bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile_params.txt "cpp_infer"
+bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt "cpp_infer"
 # Usage 1:
-bash test_tipc/test_inference_cpp.sh ./test_tipc/configs/ppocr_det_mobile_params.txt
+bash test_tipc/test_inference_cpp.sh test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
 # Usage 2: run inference on a specified GPU; the third argument is the GPU card id
-bash test_tipc/test_inference_cpp.sh ./test_tipc/configs/ppocr_det_mobile_params.txt '1'
+bash test_tipc/test_inference_cpp.sh test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt '1'
```

After the inference command runs, the run logs are saved automatically under the `test_tipc/output` folder, including the following files:
...
...
test_tipc/docs/test_lite.md → test_tipc/docs/test_lite_arm_cpu_cpp.md — View file @ 201cb592

```diff
-# Lite prediction function test
+# Lite_arm_cpu_cpp prediction function test

-The main program of the Lite prediction function test is `test_lite.sh`, which can test model inference based on the Lite inference library.
+The main program of the Lite_arm_cpu_cpp prediction function test is `test_lite_arm_cpu_cpp.sh`, which can test a model's C++ inference on ARM CPU based on the Lite inference library.

 ## 1. Summary of test conclusions

 The Lite side currently supports combinations of the following options:

 **Field descriptions:**

-- Input setting: C++ prediction, Python prediction, Java prediction
-- Model type: normal model (FP32) and quantized model (FP16)
+- Model type: normal model (FP32) and quantized model (INT8)
 - batch-size: 1 and 4
+- threads: 1 and 4
-- Number of predictors: multi-predictor and single-predictor prediction
-- Power mode: high-performance mode (LITE_POWER_HIGH) and power-saving mode (LITE_POWER_LOW)
-- Inference library source: downloaded or built from source, where builds target: (1) ARM CPU; (2) Linux XPU; (3) OpenCL GPU; (4) Metal GPU
+- Inference library source: downloaded or built from source

-| Model type | batch-size | Predictors | Power mode | Library source | Language |
-| :----: | :----: | :----: | :----: | :----: | :----: |
-| normal model / quantized model | 1 | 1 | high-performance / power-saving | download | C++ |
+| Model type | batch-size | threads | Predictors | Library source |
+| :----: | :----: | :----: | :----: | :----: |
+| normal model / quantized model | 1 | 1/4 | 1 | download |

 ## 2. Test workflow
```

@@ -24,15 +23,15 @@ The main program of the Lite prediction function test is `test_lite.sh`, which ...

### 2.1 Function test

```diff
-Run `prepare.sh` first to prepare the data and models; they are packed into test_lite.tar. Upload test_lite.tar to the phone, unpack it, enter the `test_lite` directory, then run `test_lite.sh` for the test; log files with the `lite_*.log` suffix are generated under the `test_lite/output` directory.
+Run `prepare_lite.sh` first; it generates `test_lite.tar` in the current directory, containing the test data, the test models, and the executable used for inference. Upload `test_lite.tar` to the phone under test, unpack it in the phone's terminal, enter the `test_lite` directory, then run `test_lite_arm_cpu_cpp.sh` for the test; log files with the `lite_*.log` suffix are generated under the `test_lite/output` directory.

 # data and model preparation
-bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile_params.txt "lite_infer"
+bash test_tipc/prepare_lite.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_lite_cpp_arm_cpu.txt
 # on-device test:
-bash test_lite.sh ppocr_det_mobile_params.txt
+bash test_lite_arm_cpu_cpp.sh model_linux_gpu_normal_normal_lite_cpp_arm_cpu.txt
```

@@ -44,7 +43,7 @@ bash test_lite.sh ppocr_det_mobile_params.txt

On success it prints:

```diff
-Run successfully with command - ./ocr_db_crnn det ./models/ch_ppocr_mobile_v2.0_det_slim_opt.nb INT8 4 1 LITE_POWER_LOW ./test_data/icdar2015_lite/text_localization/ch4_test_images/img_233.jpg ./config.txt True > ./output/lite_ch_ppocr_mobile_v2.0_det_slim_opt.nb_precision_INT8_batchsize_1_threads_4_powermode_LITE_POWER_LOW_singleimg_True.log 2>&1!
+Run successfully with command - ./ocr_db_crnn det ch_PP-OCRv2_det_infer_opt.nb ARM_CPU FP32 1 1 ./test_data/icdar2015_lite/text_localization/ch4_test_images/ ./config.txt True > ./output/lite_ch_PP-OCRv2_det_infer_opt.nb_runtime_device_ARM_CPU_precision_FP32_batchsize_1_threads_1.log 2>&1!
 Run successfully with command xxx
 ...
```

@@ -52,7 +51,7 @@ Run successfully with command xxx

On failure it prints:

```diff
-Run failed with command - ./ocr_db_crnn det ./models/ch_ppocr_mobile_v2.0_det_slim_opt.nb INT8 4 1 LITE_POWER_LOW ./test_data/icdar2015_lite/text_localization/ch4_test_images/img_233.jpg ./config.txt True > ./output/lite_ch_ppocr_mobile_v2.0_det_slim_opt.nb_precision_INT8_batchsize_1_threads_4_powermode_LITE_POWER_LOW_singleimg_True.log 2>&1!
+Run failed with command - ./ocr_db_crnn det ch_PP-OCRv2_det_infer_opt.nb ARM_CPU FP32 1 1 ./test_data/icdar2015_lite/text_localization/ch4_test_images/ ./config.txt True > ./output/lite_ch_PP-OCRv2_det_infer_opt.nb_runtime_device_ARM_CPU_precision_FP32_batchsize_1_threads_1.log 2>&1!
 Run failed with command xxx
 ...
```
...
...
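Reading the new success command above against the invocation loop in `test_lite_arm_cpu_cpp.sh` (shown further down this page), the positional arguments decompose as follows; a worked instance for reference:

```shell
# Positional argument layout of the new on-device command
# (model, runtime device, precision, threads, batch size, image dir, config, benchmark flag):
./ocr_db_crnn det ch_PP-OCRv2_det_infer_opt.nb ARM_CPU FP32 1 1 \
    ./test_data/icdar2015_lite/text_localization/ch4_test_images/ ./config.txt True
```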
test_tipc/prepare.sh — View file @ 201cb592

```diff
 #!/bin/bash
+source test_tipc/common_func.sh

 FILENAME=$1
-# MODE be one of ['lite_train_lite_infer' 'lite_train_whole_infer' 'whole_train_whole_infer',
-#                 'whole_infer', 'klquant_whole_infer',
-#                 'cpp_infer', 'serving_infer', 'lite_infer']
+# MODE be one of ['lite_train_lite_infer' 'lite_train_whole_infer' 'whole_train_whole_infer',
+#                 'whole_infer', 'klquant_whole_infer',
+#                 'cpp_infer', 'serving_infer']
 MODE=$2
```
...
...
@@ -12,30 +14,12 @@ dataline=$(cat ${FILENAME})

```diff
 # parser params
 IFS=$'\n'
 lines=(${dataline})

-function func_parser_key(){
-    strs=$1
-    IFS=":"
-    array=(${strs})
-    tmp=${array[0]}
-    echo ${tmp}
-}
-function func_parser_value(){
-    strs=$1
-    IFS=":"
-    array=(${strs})
-    tmp=${array[1]}
-    echo ${tmp}
-}
 IFS=$'\n'

 # The training params
 model_name=$(func_parser_value "${lines[1]}")
 trainer_list=$(func_parser_value "${lines[14]}")

-# MODE be one of ['lite_train_lite_infer' 'lite_train_whole_infer' 'whole_train_whole_infer',
-#                 'whole_infer', 'klquant_whole_infer',
-#                 'cpp_infer', 'serving_infer', 'lite_infer']
-MODE=$2

 if [ ${MODE} = "lite_train_lite_infer" ];then
     # pretrain lite train data
```
...
...
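The two parsers deleted above now come from `test_tipc/common_func.sh`, which the script sources at the top. A quick sketch of what they do with a `key:value` config line (the example line is hypothetical):

```shell
source test_tipc/common_func.sh
line="infer_model:./inference/ch_ppocr_mobile_v2.0_det_infer/"  # hypothetical config line
func_parser_key "${line}"    # prints the text before the colon: infer_model
func_parser_value "${line}"  # prints the text after the colon:  ./inference/ch_ppocr_mobile_v2.0_det_infer/
```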
@@ -169,40 +153,6 @@ if [ ${MODE} = "serving_infer" ];then

```diff
     cd ./inference && tar xf ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_det_infer.tar && cd ../
 fi

-if [ ${MODE} = "lite_infer" ];then
-    # prepare lite nb model and test data
-    current_dir=${PWD}
-    wget -nc -P ./models https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_opt.nb
-    wget -nc -P ./models https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_slim_opt.nb
-    wget -nc -P ./test_data https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015_lite.tar
-    cd ./test_data && tar -xf icdar2015_lite.tar && rm icdar2015_lite.tar && cd ../
-    # prepare lite env
-    export http_proxy=http://172.19.57.45:3128
-    export https_proxy=http://172.19.57.45:3128
-    paddlelite_url=https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz
-    paddlelite_zipfile=$(echo $paddlelite_url | awk -F "/" '{print $NF}')
-    paddlelite_file=${paddlelite_zipfile:0:66}
-    wget ${paddlelite_url}
-    tar -xf ${paddlelite_zipfile}
-    mkdir -p ${paddlelite_file}/demo/cxx/ocr/test_lite
-    mv models test_data ${paddlelite_file}/demo/cxx/ocr/test_lite
-    cp ppocr/utils/ppocr_keys_v1.txt deploy/lite/config.txt ${paddlelite_file}/demo/cxx/ocr/test_lite
-    cp ./deploy/lite/* ${paddlelite_file}/demo/cxx/ocr/
-    cp ${paddlelite_file}/cxx/lib/libpaddle_light_api_shared.so ${paddlelite_file}/demo/cxx/ocr/test_lite
-    cp test_tipc/configs/ppocr_det_mobile_params.txt test_tipc/test_lite.sh test_tipc/common_func.sh ${paddlelite_file}/demo/cxx/ocr/test_lite
-    cd ${paddlelite_file}/demo/cxx/ocr/
-    git clone https://github.com/LDOUBLEV/AutoLog.git
-    unset http_proxy
-    unset https_proxy
-    make -j
-    sleep 1
-    make -j
-    cp ocr_db_crnn test_lite && cp test_lite/libpaddle_light_api_shared.so test_lite/libc++_shared.so
-    tar -cf test_lite.tar ./test_lite && cp test_lite.tar ${current_dir} && cd ${current_dir}
-fi

 if [ ${MODE} = "paddle2onnx_infer" ];then
     # prepare serving env
     python_name=$(func_parser_value "${lines[2]}")
```
...
...
test_tipc/prepare_lite.sh — new file (mode 100644) — View file @ 201cb592

```shell
#!/bin/bash
source ./test_tipc/common_func.sh

FILENAME=$1
dataline=$(cat ${FILENAME})
# parser params
IFS=$'\n'
lines=(${dataline})
IFS=$'\n'

lite_model_list=$(func_parser_value "${lines[2]}")

# prepare lite .nb model
pip install paddlelite==2.9
current_dir=${PWD}
IFS="|"
model_path=./inference_models

for model in ${lite_model_list[*]}; do
    inference_model_url=https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/${model}.tar
    inference_model=${inference_model_url##*/}
    wget -nc -P ${model_path} ${inference_model_url}
    cd ${model_path} && tar -xf ${inference_model} && cd ../
    model_dir=${model_path}/${inference_model%.*}
    model_file=${model_dir}/inference.pdmodel
    param_file=${model_dir}/inference.pdiparams
    paddle_lite_opt --model_dir=${model_dir} --model_file=${model_file} --param_file=${param_file} --valid_targets=arm --optimize_out=${model_dir}_opt
done

# prepare test data
data_url=https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015_lite.tar
model_path=./inference_models
inference_model=${inference_model_url##*/}
data_file=${data_url##*/}
wget -nc -P ./inference_models ${inference_model_url}
wget -nc -P ./test_data ${data_url}
cd ./inference_models && tar -xf ${inference_model} && cd ../
cd ./test_data && tar -xf ${data_file} && rm ${data_file} && cd ../

# prepare lite env
paddlelite_url=https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz
paddlelite_zipfile=$(echo $paddlelite_url | awk -F "/" '{print $NF}')
paddlelite_file=${paddlelite_zipfile:0:66}
wget ${paddlelite_url} && tar -xf ${paddlelite_zipfile}
mkdir -p ${paddlelite_file}/demo/cxx/ocr/test_lite
cp -r ${model_path}/*_opt.nb test_data ${paddlelite_file}/demo/cxx/ocr/test_lite
cp ppocr/utils/ppocr_keys_v1.txt deploy/lite/config.txt ${paddlelite_file}/demo/cxx/ocr/test_lite
cp -r ./deploy/lite/* ${paddlelite_file}/demo/cxx/ocr/
cp ${paddlelite_file}/cxx/lib/libpaddle_light_api_shared.so ${paddlelite_file}/demo/cxx/ocr/test_lite
cp ${FILENAME} test_tipc/test_lite_arm_cpu_cpp.sh test_tipc/common_func.sh ${paddlelite_file}/demo/cxx/ocr/test_lite
cd ${paddlelite_file}/demo/cxx/ocr/
git clone https://github.com/cuicheng01/AutoLog.git
make -j
sleep 1
make -j
cp ocr_db_crnn test_lite && cp test_lite/libpaddle_light_api_shared.so test_lite/libc++_shared.so
tar -cf test_lite.tar ./test_lite && cp test_lite.tar ${current_dir} && cd ${current_dir}
rm -rf ${paddlelite_file}* && rm -rf ${model_path}
```
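prepare_lite.sh leans on three bash parameter expansions that are easy to misread; here is each one applied to the concrete values the script uses:

```shell
url=https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar
echo ${url##*/}   # strips the longest */ prefix -> ch_PP-OCRv2_det_infer.tar
f=${url##*/}
echo ${f%.*}      # strips the shortest .* suffix -> ch_PP-OCRv2_det_infer
zip=inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz
echo ${zip:0:66}  # first 66 characters, i.e. the tarball name minus ".tar.gz"
```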
test_tipc/readme.md — View file @ 201cb592

@@ -60,16 +60,20 @@

```diff
 test_tipc/
 ├── configs/                                # config file directory
-│   ├── det_mv3_db.yml                      # yml for training the mobile ppocr detection model
-│   ├── det_r50_vd_db.yml                   # yml for training the server ppocr detection model
-│   ├── rec_icdar15_r34_train.yml           # yml for training the server ppocr recognition model
-│   ├── ppocr_sys_mobile_params.txt         # params config for the mobile ppocr detection + recognition pipeline test
-│   ├── ppocr_det_mobile_params.txt         # params config for the mobile ppocr detection model test
-│   ├── ppocr_rec_mobile_params.txt         # params config for the mobile ppocr recognition model test
-│   ├── ppocr_sys_server_params.txt         # params config for the server ppocr detection + recognition pipeline test
-│   ├── ppocr_det_server_params.txt         # params config for the server ppocr detection model test
-│   ├── ppocr_rec_server_params.txt         # params config for the server ppocr recognition model test
-│   ├── ...
+│   ├── ppocr_det_mobile                    # test config directory for the ppocr_det_mobile model
+│   │   ├── det_mv3_db.yml                  # yml for training the mobile ppocr detection model
+│   │   ├── train_infer_python.txt          # config for basic Python training and inference on Linux
+│   │   ├── model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt   # config for C++ inference on Linux
+│   │   ├── model_linux_gpu_normal_normal_infer_python_jetson.txt       # config for Python inference on Jetson
+│   │   ├── train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt    # config for multi-machine, multi-GPU, mixed-precision training and Python inference on Linux
+│   │   ├── ...
+│   ├── ppocr_det_server                    # test config directory for the ppocr_det_server model
+│   │   ├── ...
+│   ├── ppocr_rec_mobile                    # test config directory for the ppocr_rec_mobile model
+│   │   ├── ...
+│   ├── ppocr_rec_server                    # test config directory for the ppocr_rec_server model
+│   │   ├── ...
 │   ├── ...
 ├── results/                                # pre-saved prediction results, for accuracy comparison against actual predictions
 │   ├── python_ppocr_det_mobile_results_fp32.txt   # pre-saved fp32 Python prediction results for the mobile ppocr detection model
 │   ├── python_ppocr_det_mobile_results_fp16.txt   # pre-saved fp16 Python prediction results for the mobile ppocr detection model
 ...
```

@@ -80,11 +84,22 @@ test_tipc/

```diff
 ├── test_train_inference_python.sh          # main program for the Python training and inference test
 ├── test_inference_cpp.sh                   # main program for the C++ inference test
 ├── test_serving.sh                         # main program for the Serving deployment test
-├── test_lite.sh                            # main program for the Lite deployment test
+├── test_lite_arm_cpu_cpp.sh                # main program for the C++ Lite deployment test on arm_cpu
 ├── compare_results.py                      # checks whether predictions in the logs match the pre-saved results within the allowed tolerance
 └── readme.md                               # documentation
```

### Config file naming convention (section added by this commit)

Under the `configs` directory, files are split into subdirectories by model name; each subdirectory holds all the config files that model's tests need. Config file names follow these rules:

1. The basic training and inference config is named simply `train_infer_python.txt`, meaning **single-machine training without mixed precision + Python inference on Linux**. Its full name corresponds to `train_linux_gpu_normal_normal_infer_python_linux_gpu_cpu.txt`; because this config is used so frequently, the name is shortened.
2. Other configs that include training are named `train_<training hardware (linux_gpu/linux_dcu/…)>_<multi-machine? (fleet/normal)>_<mixed precision? (amp/normal)>_<deployment mode (infer/lite/serving/js)>_<language (cpp/python/java)>_<inference hardware (linux_gpu/mac/jetson/opencl_arm_gpu/...)>.txt`. For example, the Linux GPU multi-machine, multi-GPU, mixed-precision chain corresponds to `train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt`, and basic training and inference on Linux DCU corresponds to `train_linux_dcu_normal_normal_infer_python_dcu.txt`.
3. Inference-only configs (serving, lite, and so on) are named `model_<training hardware (linux_gpu/linux_dcu/…)>_<multi-machine? (fleet/normal)>_<mixed precision? (amp/normal)>_<infer/lite/serving/js>_<language (cpp/python/java)>_<inference hardware (linux_gpu/mac/jetson/opencl_arm_gpu/...)>.txt`. Compared with rule 2, only the first field changes from train to model; the model is downloaded directly at test time, and the "training hardware" field records the environment in which the tested model was trained.

With this convention, the test scenario and function can be read directly off a config file name.
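Applying the convention to the fleet/amp example, the fields decompose as follows (annotation added for this review, not part of the commit):

```shell
# train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt
#   train         -> config with a training stage ("model" would mean inference-only)
#   linux_gpu     -> training hardware
#   fleet         -> multi-machine (vs normal)
#   amp           -> mixed precision (vs normal)
#   infer         -> deployment mode (infer/lite/serving/js)
#   python        -> language (cpp/python/java)
#   linux_gpu_cpu -> inference hardware
```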
### Test workflow

With this tool you can test whether each function is supported and whether prediction results match; the workflow is as follows:

<div align="center">
...

@@ -99,7 +114,8 @@ test_tipc/

```diff
 - `test_train_inference_python.sh`: tests Python-based model training, evaluation, inference and other basics, including pruning, quantization and distillation.
 - `test_inference_cpp.sh`: tests C++-based model inference.
 - `test_serving.sh`: tests service deployment with Paddle Serving.
-- `test_lite.sh`: tests on-device inference deployment with Paddle-Lite.
+- `test_lite_arm_cpu_cpp.sh`: tests C++ inference deployment on ARM CPU with Paddle-Lite.
 - `test_paddle2onnx.sh`: tests Paddle2ONNX model conversion and verifies correctness.
```

<a name="more"></a>

#### More tutorials

@@ -107,4 +123,5 @@ test_tipc/

```diff
 [test_train_inference_python usage](docs/test_train_inference_python.md)
 [test_inference_cpp usage](docs/test_inference_cpp.md)
 [test_serving usage](docs/test_serving.md)
-[test_lite usage](docs/test_lite.md)
+[test_lite_arm_cpu_cpp usage](docs/test_lite_arm_cpu_cpp.md)
 [test_paddle2onnx usage](docs/test_paddle2onnx.md)
```
test_tipc/test_inference_cpp.sh — View file @ 201cb592

@@ -2,38 +2,38 @@

```diff
 source test_tipc/common_func.sh

 FILENAME=$1
-dataline=$(awk 'NR==52, NR==66{print}' $FILENAME)
+dataline=$(awk 'NR==1, NR==16{print}' $FILENAME)

 # parser params
 IFS=$'\n'
 lines=(${dataline})

 # parser cpp inference model
-use_opencv=$(func_parser_value "${lines[1]}")
-cpp_infer_model_dir_list=$(func_parser_value "${lines[2]}")
-cpp_infer_is_quant=$(func_parser_value "${lines[3]}")
+model_name=$(func_parser_value "${lines[1]}")
+use_opencv=$(func_parser_value "${lines[2]}")
+cpp_infer_model_dir_list=$(func_parser_value "${lines[3]}")
+cpp_infer_is_quant=$(func_parser_value "${lines[4]}")
 # parser cpp inference
-inference_cmd=$(func_parser_value "${lines[4]}")
-cpp_use_gpu_key=$(func_parser_key "${lines[5]}")
-cpp_use_gpu_list=$(func_parser_value "${lines[5]}")
-cpp_use_mkldnn_key=$(func_parser_key "${lines[6]}")
-cpp_use_mkldnn_list=$(func_parser_value "${lines[6]}")
-cpp_cpu_threads_key=$(func_parser_key "${lines[7]}")
-cpp_cpu_threads_list=$(func_parser_value "${lines[7]}")
-cpp_batch_size_key=$(func_parser_key "${lines[8]}")
-cpp_batch_size_list=$(func_parser_value "${lines[8]}")
-cpp_use_trt_key=$(func_parser_key "${lines[9]}")
-cpp_use_trt_list=$(func_parser_value "${lines[9]}")
-cpp_precision_key=$(func_parser_key "${lines[10]}")
-cpp_precision_list=$(func_parser_value "${lines[10]}")
-cpp_infer_model_key=$(func_parser_key "${lines[11]}")
-cpp_image_dir_key=$(func_parser_key "${lines[12]}")
-cpp_infer_img_dir=$(func_parser_value "${lines[12]}")
-cpp_infer_key1=$(func_parser_key "${lines[13]}")
-cpp_infer_value1=$(func_parser_value "${lines[13]}")
-cpp_benchmark_key=$(func_parser_key "${lines[14]}")
-cpp_benchmark_value=$(func_parser_value "${lines[14]}")
+inference_cmd=$(func_parser_value "${lines[5]}")
+cpp_use_gpu_key=$(func_parser_key "${lines[6]}")
+cpp_use_gpu_list=$(func_parser_value "${lines[6]}")
+cpp_use_mkldnn_key=$(func_parser_key "${lines[7]}")
+cpp_use_mkldnn_list=$(func_parser_value "${lines[7]}")
+cpp_cpu_threads_key=$(func_parser_key "${lines[8]}")
+cpp_cpu_threads_list=$(func_parser_value "${lines[8]}")
+cpp_batch_size_key=$(func_parser_key "${lines[9]}")
+cpp_batch_size_list=$(func_parser_value "${lines[9]}")
+cpp_use_trt_key=$(func_parser_key "${lines[10]}")
+cpp_use_trt_list=$(func_parser_value "${lines[10]}")
+cpp_precision_key=$(func_parser_key "${lines[11]}")
+cpp_precision_list=$(func_parser_value "${lines[11]}")
+cpp_infer_model_key=$(func_parser_key "${lines[12]}")
+cpp_image_dir_key=$(func_parser_key "${lines[13]}")
+cpp_infer_img_dir=$(func_parser_value "${lines[13]}")
+cpp_infer_key1=$(func_parser_key "${lines[14]}")
+cpp_infer_value1=$(func_parser_value "${lines[14]}")
+cpp_benchmark_key=$(func_parser_key "${lines[15]}")
+cpp_benchmark_value=$(func_parser_value "${lines[15]}")

 LOG_PATH="./test_tipc/output"
 mkdir -p ${LOG_PATH}
```
...
...
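The net effect of the hunk above: the per-model config now carries the model name on its second line, so every parser index shifts up by one, and the script reads lines 1-16 of the dedicated C++ config file instead of lines 52-66 of the old combined params file. As an index map, derived from the diff:

```shell
# lines[1] -> model_name           lines[9]  -> batch_size key/list
# lines[2] -> use_opencv           lines[10] -> use_trt key/list
# lines[3] -> infer model dirs     lines[11] -> precision key/list
# lines[4] -> is_quant             lines[12] -> infer model key
# lines[5] -> inference_cmd        lines[13] -> image dir key / img dir
# lines[6] -> use_gpu key/list     lines[14] -> extra key1/value1
# lines[7] -> use_mkldnn key/list  lines[15] -> benchmark key/value
# lines[8] -> cpu_threads key/list
```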
test_tipc/test_lite.sh → test_tipc/test_lite_arm_cpu_cpp.sh — View file @ 201cb592

@@ -3,8 +3,7 @@ source ./common_func.sh

```diff
 export LD_LIBRARY_PATH=${PWD}:$LD_LIBRARY_PATH

 FILENAME=$1
-dataline=$(awk 'NR==102, NR==111{print}' $FILENAME)
-echo $dataline
+dataline=$(cat $FILENAME)

 # parser params
 IFS=$'\n'
 lines=(${dataline})
```

@@ -12,13 +11,14 @@ lines=(${dataline})

```diff
 # parser lite inference
 lite_inference_cmd=$(func_parser_value "${lines[1]}")
 lite_model_dir_list=$(func_parser_value "${lines[2]}")
-lite_cpu_threads_list=$(func_parser_value "${lines[3]}")
-lite_batch_size_list=$(func_parser_value "${lines[4]}")
-lite_power_mode_list=$(func_parser_value "${lines[5]}")
-lite_infer_img_dir_list=$(func_parser_value "${lines[6]}")
-lite_config_dir=$(func_parser_value "${lines[7]}")
-lite_rec_dict_dir=$(func_parser_value "${lines[8]}")
-lite_benchmark_value=$(func_parser_value "${lines[9]}")
+runtime_device=$(func_parser_value "${lines[3]}")
+lite_cpu_threads_list=$(func_parser_value "${lines[4]}")
+lite_batch_size_list=$(func_parser_value "${lines[5]}")
+lite_infer_img_dir_list=$(func_parser_value "${lines[8]}")
+lite_config_dir=$(func_parser_value "${lines[9]}")
+lite_rec_dict_dir=$(func_parser_value "${lines[10]}")
+lite_benchmark_value=$(func_parser_value "${lines[11]}")

 LOG_PATH="./output"
 mkdir -p ${LOG_PATH}
```

@@ -37,23 +37,14 @@ function func_lite(){

```diff
     else
         precision="FP32"
     fi
-    is_single_img=$(echo $_img_dir | grep -E ".jpg|.jpeg|.png|.JPEG|.JPG")
-    if [[ "$is_single_img" != "" ]]; then
-        single_img="True"
-    else
-        single_img="False"
-    fi

     # lite inference
     for num_threads in ${lite_cpu_threads_list[*]}; do
-        for power_mode in ${lite_power_mode_list[*]}; do
-            for batchsize in ${lite_batch_size_list[*]}; do
-                model_name=$(echo $lite_model | awk -F "/" '{print $NF}')
-                _save_log_path="${_log_path}/lite_${model_name}_precision_${precision}_batchsize_${batchsize}_threads_${num_threads}_powermode_${power_mode}_singleimg_${single_img}.log"
-                command="${_script} ${lite_model} ${precision} ${num_threads} ${batchsize} ${power_mode} ${_img_dir} ${_config} ${lite_benchmark_value} > ${_save_log_path} 2>&1"
-                eval ${command}
-                status_check $? "${command}" "${status_log}"
-            done
+        for batchsize in ${lite_batch_size_list[*]}; do
+            _save_log_path="${_log_path}/lite_${_lite_model}_runtime_device_${runtime_device}_precision_${precision}_batchsize_${batchsize}_threads_${num_threads}.log"
+            command="${_script} ${_lite_model} ${runtime_device} ${precision} ${num_threads} ${batchsize} ${_img_dir} ${_config} ${lite_benchmark_value} > ${_save_log_path} 2>&1"
+            eval ${command}
+            status_check $? "${command}" "${status_log}"
         done
     done
 }
```

@@ -64,6 +55,6 @@ IFS="|"

```diff
 for lite_model in ${lite_model_dir_list[*]}; do
     # run lite inference
     for img_dir in ${lite_infer_img_dir_list[*]}; do
-        func_lite "${lite_inference_cmd}" "${lite_model}" "${LOG_PATH}" "${img_dir}" "${lite_config_dir}"
+        func_lite "${lite_inference_cmd}" "${lite_model}_opt.nb" "${LOG_PATH}" "${img_dir}" "${lite_config_dir}"
     done
 done
```
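Instantiating the new `_save_log_path` template with the values from the doc example earlier on this page reproduces the documented log name; a minimal check:

```shell
_lite_model=ch_PP-OCRv2_det_infer_opt.nb
runtime_device=ARM_CPU; precision=FP32; batchsize=1; num_threads=1
echo "lite_${_lite_model}_runtime_device_${runtime_device}_precision_${precision}_batchsize_${batchsize}_threads_${num_threads}.log"
# -> lite_ch_PP-OCRv2_det_infer_opt.nb_runtime_device_ARM_CPU_precision_FP32_batchsize_1_threads_1.log
```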
tools/eval.py — View file @ 201cb592

@@ -27,7 +27,7 @@ from ppocr.data import build_dataloader

```diff
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
 from ppocr.metrics import build_metric
-from ppocr.utils.save_load import init_model, load_dygraph_params
+from ppocr.utils.save_load import load_model
 from ppocr.utils.utility import print_dict
 import tools.program as program
```

@@ -60,7 +60,7 @@ def main():

```diff
     else:
         model_type = None

-    best_model_dict = load_dygraph_params(config, model, logger, None)
+    best_model_dict = load_model(config, model)
     if len(best_model_dict):
         logger.info('metric in ckpt ***************')
         for k, v in best_model_dict.items():
```
...
...
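All eight Python files in this commit make the same substitution: the old `init_model(...)`/`load_dygraph_params(...)` pair from `ppocr.utils.save_load` is replaced by a single `load_model(config, model)` (or `load_model(config, model, optimizer)` in train.py). The call sites pin down the signature; the body below is only a hedged sketch of such a unified loader, not the repo's actual `ppocr/utils/save_load.py`:

```python
import paddle

def load_model(config, model, optimizer=None):
    """Sketch of a unified loader: restore the weights named in
    config['Global'] and return any checkpoint metrics (the diff shows
    callers doing len(best_model_dict) on the return value)."""
    global_cfg = config['Global']
    best_model_dict = {}
    checkpoints = global_cfg.get('checkpoints')
    pretrained = global_cfg.get('pretrained_model')
    if checkpoints:  # resume training: weights plus optimizer state
        model.set_state_dict(paddle.load(checkpoints + '.pdparams'))
        if optimizer is not None:
            optimizer.set_state_dict(paddle.load(checkpoints + '.pdopt'))
    elif pretrained:  # inference / fine-tuning: weights only
        model.set_state_dict(paddle.load(pretrained + '.pdparams'))
    return best_model_dict
```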
tools/export_center.py — View file @ 201cb592

@@ -27,7 +27,7 @@ sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))

```diff
 from ppocr.data import build_dataloader
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model, load_dygraph_params
+from ppocr.utils.save_load import load_model
 from ppocr.utils.utility import print_dict
 import tools.program as program
```

@@ -57,7 +57,7 @@ def main():

```diff
     model = build_model(config['Architecture'])

-    best_model_dict = load_dygraph_params(config, model, logger, None)
+    best_model_dict = load_model(config, model)
     if len(best_model_dict):
         logger.info('metric in ckpt ***************')
         for k, v in best_model_dict.items():
```
...
...
tools/export_model.py — View file @ 201cb592

@@ -26,7 +26,7 @@ from paddle.jit import to_static

```diff
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model
+from ppocr.utils.save_load import load_model
 from ppocr.utils.logging import get_logger
 from tools.program import load_config, merge_config, ArgsParser
```

@@ -107,7 +107,7 @@ def main():

```diff
     else:  # base rec model
         config["Architecture"]["Head"]["out_channels"] = char_num
     model = build_model(config["Architecture"])

-    init_model(config, model)
+    load_model(config, model)
     model.eval()

     save_path = config["Global"]["save_inference_dir"]
```
...
...
tools/infer_cls.py — View file @ 201cb592

@@ -32,7 +32,7 @@ import paddle

```diff
 from ppocr.data import create_operators, transform
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model
+from ppocr.utils.save_load import load_model
 from ppocr.utils.utility import get_image_file_list
 import tools.program as program
```

@@ -47,7 +47,7 @@ def main():

```diff
     # build model
     model = build_model(config['Architecture'])

-    init_model(config, model)
+    load_model(config, model)

     # create data ops
     transforms = []
```
...
...
tools/infer_det.py — View file @ 201cb592

@@ -34,7 +34,7 @@ import paddle

```diff
 from ppocr.data import create_operators, transform
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model, load_dygraph_params
+from ppocr.utils.save_load import load_model
 from ppocr.utils.utility import get_image_file_list
 import tools.program as program
```

@@ -59,7 +59,7 @@ def main():

```diff
     # build model
     model = build_model(config['Architecture'])

-    _ = load_dygraph_params(config, model, logger, None)
+    load_model(config, model)

     # build post process
     post_process_class = build_post_process(config['PostProcess'])
```
...
...
tools/infer_e2e.py — View file @ 201cb592

@@ -34,7 +34,7 @@ import paddle

```diff
 from ppocr.data import create_operators, transform
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model
+from ppocr.utils.save_load import load_model
 from ppocr.utils.utility import get_image_file_list
 import tools.program as program
```

@@ -68,7 +68,7 @@ def main():

```diff
     # build model
     model = build_model(config['Architecture'])

-    init_model(config, model)
+    load_model(config, model)

     # build post process
     post_process_class = build_post_process(config['PostProcess'],
```
...
...
tools/infer_rec.py — View file @ 201cb592

@@ -33,7 +33,7 @@ import paddle

```diff
 from ppocr.data import create_operators, transform
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model
+from ppocr.utils.save_load import load_model
 from ppocr.utils.utility import get_image_file_list
 import tools.program as program
```

@@ -58,7 +58,7 @@ def main():

```diff
     model = build_model(config['Architecture'])

-    init_model(config, model)
+    load_model(config, model)

     # create data ops
     transforms = []
```

@@ -75,9 +75,7 @@ def main():

```diff
             'gsrm_slf_attn_bias1', 'gsrm_slf_attn_bias2'
         ]
     elif config['Architecture']['algorithm'] == "SAR":
         op[op_name]['keep_keys'] = ['image', 'valid_ratio']
-        op[op_name]['keep_keys'] = ['image', 'valid_ratio']
     else:
         op[op_name]['keep_keys'] = ['image']
     transforms.append(op)
```
...
...
tools/infer_table.py — View file @ 201cb592

@@ -34,11 +34,12 @@ from paddle.jit import to_static

```diff
 from ppocr.data import create_operators, transform
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model
+from ppocr.utils.save_load import load_model
 from ppocr.utils.utility import get_image_file_list
 import tools.program as program
+import cv2


 def main(config, device, logger, vdl_writer):
     global_config = config['Global']
```

@@ -53,7 +54,7 @@ def main(config, device, logger, vdl_writer):

```diff
     model = build_model(config['Architecture'])

-    init_model(config, model, logger)
+    load_model(config, model)

     # create data ops
     transforms = []
```

@@ -104,4 +105,3 @@ def main(config, device, logger, vdl_writer):

```diff
 if __name__ == '__main__':
     config, device, logger, vdl_writer = program.preprocess()
     main(config, device, logger, vdl_writer)
```
tools/train.py — View file @ 201cb592

@@ -35,7 +35,7 @@ from ppocr.losses import build_loss

```diff
 from ppocr.optimizer import build_optimizer
 from ppocr.postprocess import build_post_process
 from ppocr.metrics import build_metric
-from ppocr.utils.save_load import init_model, load_dygraph_params
+from ppocr.utils.save_load import load_model
 import tools.program as program

 dist.get_world_size()
```

@@ -97,7 +97,7 @@ def main(config, device, logger, vdl_writer):

```diff
     # build metric
     eval_class = build_metric(config['Metric'])
     # load pretrain model
-    pre_best_model_dict = load_dygraph_params(config, model, logger, optimizer)
+    pre_best_model_dict = load_model(config, model, optimizer)

     logger.info('train dataloader has {} iters'.format(len(train_dataloader)))
     if valid_dataloader is not None:
         logger.info('valid dataloader has {} iters'.format(
```
...
...
Page 1 of 4 — the remaining changed files of this merge commit are on the following pages.