"...composable_kernel_onnxruntime.git" did not exist on "6dfb4e7851a99eab605d239873b7eca777980fa8"
Commit 201cb592 authored by LDOUBLEV

Merge branch 'dygraph' of https://github.com/PaddlePaddle/PaddleOCR into test_v11

parents 9415f71d ccd01cfe
test_tipc/docs/lite_log.png: binary image replaced (776 KB → 169 KB)
@@ -20,12 +20,12 @@ The main program of the C++ inference functional test is `test_inference_cpp.sh`
First run `prepare.sh` to prepare the data and models, then run `test_inference_cpp.sh` to run the test. Log files suffixed `cpp_infer_*.log` are finally generated in the `test_tipc/output` directory.
```shell
bash test_tipc/prepare.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt "cpp_infer"
# Usage 1:
bash test_tipc/test_inference_cpp.sh test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# Usage 2: specify the GPU card for inference; the third argument is the GPU card ID
bash test_tipc/test_inference_cpp.sh test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt '1'
```
After the inference command runs, the logs are saved automatically under the `test_tipc/output` folder, including the following files:
......
# Lite\_arm\_cpu\_cpp Inference Functional Test

The main program of the Lite\_arm\_cpu\_cpp inference functional test is `test_lite_arm_cpu_cpp.sh`, which tests a model's C++ inference with the Paddle-Lite inference library on ARM CPU.

## 1. Test Summary

The Lite tests currently support the following combinations:

**Field descriptions:**

- Model type: normal model (FP32) and quantized model (INT8)
- batch-size: 1 and 4
- threads: 1 and 4
- Number of predictors: multiple-predictor and single-predictor inference
- Inference library source: downloaded or compiled

| Model type | batch-size | threads | Number of predictors | Inference library source |
| :----: | :----: | :----: | :----: | :----: |
| Normal model / quantized model | 1 | 1/4 | 1 | Downloaded |
## 2. Test Procedure
@@ -24,15 +23,15 @@ The main program of the Lite inference functional test is `test_lite.sh`
### 2.1 Functional Test
First run `prepare_lite.sh`; it generates `test_lite.tar` in the current directory, containing the test data, test models, and the executable used for inference. Upload `test_lite.tar` to the phone under test, unpack it in the phone's terminal, enter the `test_lite` directory, and run `test_lite_arm_cpu_cpp.sh`. Log files suffixed `lite_*.log` are finally generated in the `test_lite/output` directory. (One way to push the archive with adb is sketched after the commands below.)
```shell
# prepare data and models
bash test_tipc/prepare_lite.sh ./test_tipc/configs/ppocr_det_mobile/model_linux_gpu_normal_normal_lite_cpp_arm_cpu.txt
# on-device test:
bash test_lite_arm_cpu_cpp.sh model_linux_gpu_normal_normal_lite_cpp_arm_cpu.txt
```
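For the upload step, `adb` is one common route. A minimal sketch, assuming the device is connected and `/data/local/tmp` is writable; the paths and the adb workflow here are illustrative and not part of the test scripts:
```shell
# Push the archive built by prepare_lite.sh to the device and unpack it
adb push test_lite.tar /data/local/tmp/
adb shell "cd /data/local/tmp && tar -xf test_lite.tar"
# Run the test inside the unpacked directory
adb shell "cd /data/local/tmp/test_lite && sh test_lite_arm_cpu_cpp.sh model_linux_gpu_normal_normal_lite_cpp_arm_cpu.txt"
```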
@@ -44,7 +43,7 @@ bash test_lite_arm_cpu_cpp.sh model_linux_gpu_normal_normal_lite_cpp_arm_cpu.txt
On success, the output looks like:
```
Run successfully with command - ./ocr_db_crnn det ch_PP-OCRv2_det_infer_opt.nb ARM_CPU FP32 1 1 ./test_data/icdar2015_lite/text_localization/ch4_test_images/ ./config.txt True > ./output/lite_ch_PP-OCRv2_det_infer_opt.nb_runtime_device_ARM_CPU_precision_FP32_batchsize_1_threads_1.log 2>&1!
Run successfully with command xxx
...
```
@@ -52,7 +51,7 @@ Run successfully with command xxx
On failure, the output looks like:
```
Run failed with command - ./ocr_db_crnn det ch_PP-OCRv2_det_infer_opt.nb ARM_CPU FP32 1 1 ./test_data/icdar2015_lite/text_localization/ch4_test_images/ ./config.txt True > ./output/lite_ch_PP-OCRv2_det_infer_opt.nb_runtime_device_ARM_CPU_precision_FP32_batchsize_1_threads_1.log 2>&1!
Run failed with command xxx
...
```
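Since each command's stdout and stderr are redirected into the named log file, a failed run can be inspected directly; the log name below is taken from the failure message above:
```shell
# The failing command's full output is preserved in its log
cat ./output/lite_ch_PP-OCRv2_det_infer_opt.nb_runtime_device_ARM_CPU_precision_FP32_batchsize_1_threads_1.log
```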
......
#!/bin/bash
source test_tipc/common_func.sh
FILENAME=$1
# MODE must be one of ['lite_train_lite_infer' 'lite_train_whole_infer' 'whole_train_whole_infer',
# 'whole_infer', 'klquant_whole_infer',
# 'cpp_infer', 'serving_infer']
MODE=$2
@@ -12,30 +14,12 @@ dataline=$(cat ${FILENAME})
# parser params
IFS=$'\n'
lines=(${dataline})
function func_parser_key(){
strs=$1
IFS=":"
array=(${strs})
tmp=${array[0]}
echo ${tmp}
}
function func_parser_value(){
strs=$1
IFS=":"
array=(${strs})
tmp=${array[1]}
echo ${tmp}
}
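# Usage sketch for the two helpers above (the config line is hypothetical):
#   line="infer_model:./models/det_opt.nb"
#   func_parser_key "${line}"     # prints: infer_model
#   func_parser_value "${line}"   # prints: ./models/det_opt.nb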
IFS=$'\n'
# The training params
model_name=$(func_parser_value "${lines[1]}")
trainer_list=$(func_parser_value "${lines[14]}")
if [ ${MODE} = "lite_train_lite_infer" ];then
# pretrain lite train data
@@ -169,40 +153,6 @@ if [ ${MODE} = "serving_infer" ];then
cd ./inference && tar xf ch_ppocr_mobile_v2.0_det_infer.tar && tar xf ch_ppocr_mobile_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_rec_infer.tar && tar xf ch_ppocr_server_v2.0_det_infer.tar && cd ../
fi
if [ ${MODE} = "lite_infer" ];then
# prepare lite nb model and test data
current_dir=${PWD}
wget -nc -P ./models https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_opt.nb
wget -nc -P ./models https://paddleocr.bj.bcebos.com/dygraph_v2.0/lite/ch_ppocr_mobile_v2.0_det_slim_opt.nb
wget -nc -P ./test_data https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015_lite.tar
cd ./test_data && tar -xf icdar2015_lite.tar && rm icdar2015_lite.tar && cd ../
# prepare lite env
export http_proxy=http://172.19.57.45:3128
export https_proxy=http://172.19.57.45:3128
paddlelite_url=https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz
paddlelite_zipfile=$(echo $paddlelite_url | awk -F "/" '{print $NF}')
paddlelite_file=${paddlelite_zipfile:0:66}
wget ${paddlelite_url}
tar -xf ${paddlelite_zipfile}
mkdir -p ${paddlelite_file}/demo/cxx/ocr/test_lite
mv models test_data ${paddlelite_file}/demo/cxx/ocr/test_lite
cp ppocr/utils/ppocr_keys_v1.txt deploy/lite/config.txt ${paddlelite_file}/demo/cxx/ocr/test_lite
cp ./deploy/lite/* ${paddlelite_file}/demo/cxx/ocr/
cp ${paddlelite_file}/cxx/lib/libpaddle_light_api_shared.so ${paddlelite_file}/demo/cxx/ocr/test_lite
cp test_tipc/configs/ppocr_det_mobile_params.txt test_tipc/test_lite.sh test_tipc/common_func.sh ${paddlelite_file}/demo/cxx/ocr/test_lite
cd ${paddlelite_file}/demo/cxx/ocr/
git clone https://github.com/LDOUBLEV/AutoLog.git
unset http_proxy
unset https_proxy
make -j
sleep 1
make -j
cp ocr_db_crnn test_lite && cp test_lite/libpaddle_light_api_shared.so test_lite/libc++_shared.so
tar -cf test_lite.tar ./test_lite && cp test_lite.tar ${current_dir} && cd ${current_dir}
fi
if [ ${MODE} = "paddle2onnx_infer" ];then
# prepare paddle2onnx env
python_name=$(func_parser_value "${lines[2]}")
......
#!/bin/bash
source ./test_tipc/common_func.sh
FILENAME=$1
dataline=$(cat ${FILENAME})
# parser params
IFS=$'\n'
lines=(${dataline})
IFS=$'\n'
lite_model_list=$(func_parser_value "${lines[2]}")
# prepare lite .nb model
pip install paddlelite==2.9
current_dir=${PWD}
IFS="|"
model_path=./inference_models
for model in ${lite_model_list[*]}; do
inference_model_url=https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/${model}.tar
inference_model=${inference_model_url##*/}
wget -nc -P ${model_path} ${inference_model_url}
cd ${model_path} && tar -xf ${inference_model} && cd ../
model_dir=${model_path}/${inference_model%.*}
model_file=${model_dir}/inference.pdmodel
param_file=${model_dir}/inference.pdiparams
paddle_lite_opt --model_dir=${model_dir} --model_file=${model_file} --param_file=${param_file} --valid_targets=arm --optimize_out=${model_dir}_opt
done
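# Parameter-expansion notes for the loop above (the values are illustrative):
#   ${inference_model_url##*/}  strips the longest "*/" prefix, leaving e.g. "ch_PP-OCRv2_det_infer.tar"
#   ${inference_model%.*}       strips the shortest ".*" suffix, leaving e.g. "ch_PP-OCRv2_det_infer"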
# prepare test data
data_url=https://paddleocr.bj.bcebos.com/dygraph_v2.0/test/icdar2015_lite.tar
data_file=${data_url##*/}
wget -nc -P ./test_data ${data_url}
cd ./test_data && tar -xf ${data_file} && rm ${data_file} && cd ../
# prepare lite env
paddlelite_url=https://github.com/PaddlePaddle/Paddle-Lite/releases/download/v2.9/inference_lite_lib.android.armv8.gcc.c++_shared.with_extra.with_cv.tar.gz
paddlelite_zipfile=$(echo $paddlelite_url | awk -F "/" '{print $NF}')
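# ${paddlelite_zipfile:0:66} below keeps the first 66 characters of the archive
# name, i.e. everything before the trailing ".tar.gz"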
paddlelite_file=${paddlelite_zipfile:0:66}
wget ${paddlelite_url} && tar -xf ${paddlelite_zipfile}
mkdir -p ${paddlelite_file}/demo/cxx/ocr/test_lite
cp -r ${model_path}/*_opt.nb test_data ${paddlelite_file}/demo/cxx/ocr/test_lite
cp ppocr/utils/ppocr_keys_v1.txt deploy/lite/config.txt ${paddlelite_file}/demo/cxx/ocr/test_lite
cp -r ./deploy/lite/* ${paddlelite_file}/demo/cxx/ocr/
cp ${paddlelite_file}/cxx/lib/libpaddle_light_api_shared.so ${paddlelite_file}/demo/cxx/ocr/test_lite
cp ${FILENAME} test_tipc/test_lite_arm_cpu_cpp.sh test_tipc/common_func.sh ${paddlelite_file}/demo/cxx/ocr/test_lite
cd ${paddlelite_file}/demo/cxx/ocr/
git clone https://github.com/cuicheng01/AutoLog.git
make -j
sleep 1
make -j
cp ocr_db_crnn test_lite && cp test_lite/libpaddle_light_api_shared.so test_lite/libc++_shared.so
tar -cf test_lite.tar ./test_lite && cp test_lite.tar ${current_dir} && cd ${current_dir}
rm -rf ${paddlelite_file}* && rm -rf ${model_path}
@@ -60,16 +60,20 @@
```shell
test_tipc/
├── configs/  # config file directory
│   ├── ppocr_det_mobile/  # test config directory for the ppocr_det_mobile model
│   │   ├── det_mv3_db.yml  # yml file for training the mobile ppocr detection model
│   │   ├── train_infer_python.txt  # config for python training + inference (basic chain) on Linux
│   │   ├── model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt  # config for C++ inference on Linux
│   │   ├── model_linux_gpu_normal_normal_infer_python_jetson.txt  # config for python inference on Jetson
│   │   ├── train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt  # config for multi-machine multi-GPU mixed-precision training + python inference on Linux
│   │   └── ...
│   ├── ppocr_det_server/  # test config directory for the ppocr_det_server model
│   │   └── ...
│   ├── ppocr_rec_mobile/  # test config directory for the ppocr_rec_mobile model
│   │   └── ...
│   ├── ppocr_rec_server/  # test config directory for the ppocr_rec_server model
│   │   └── ...
│   └── ...
├── results/  # pre-saved prediction results, used to check accuracy against actual prediction results
│   ├── python_ppocr_det_mobile_results_fp32.txt  # pre-saved fp32 python prediction results for the mobile ppocr detection model
│   ├── python_ppocr_det_mobile_results_fp16.txt  # pre-saved fp16 python prediction results for the mobile ppocr detection model
│   └── ...
├── test_train_inference_python.sh  # main program for testing python training and inference
├── test_inference_cpp.sh  # main program for testing C++ inference
├── test_serving.sh  # main program for testing serving deployment and inference
├── test_lite_arm_cpu_cpp.sh  # main program for testing Lite C++ inference on ARM CPU
├── compare_results.py  # checks whether the accuracy gap between predictions in logs and pre-saved results is within tolerance
└── readme.md  # documentation
```
### Config File Naming Convention

Under the `configs` directory, configs are grouped into subdirectories by model name; each subdirectory holds all the config files needed to test that model. Config file names follow these rules:

1. The basic training + inference config is simply named `train_infer_python.txt`, meaning **single-machine training without mixed precision + python inference on Linux**; its full name would be `train_linux_gpu_normal_normal_infer_python_linux_gpu_cpu.txt`, shortened here because this config is used so frequently.
2. Other training configs are named `train_<training hardware (linux_gpu/linux_dcu/...)>_<multi-machine or not (fleet/normal)>_<mixed precision or not (amp/normal)>_<inference mode (infer/lite/serving/js)>_<language (cpp/python/java)>_<inference hardware (linux_gpu/mac/jetson/opencl_arm_gpu/...)>.txt`. For example, the multi-machine multi-GPU mixed-precision chain on Linux GPU corresponds to `train_linux_gpu_fleet_amp_infer_python_linux_gpu_cpu.txt`, and the basic training + inference chain on Linux DCU corresponds to `train_linux_dcu_normal_normal_infer_python_dcu.txt`.
3. Inference-only configs (e.g. serving, lite) are named `model_<training hardware>_<fleet/normal>_<amp/normal>_<infer/lite/serving/js>_<language (cpp/python/java)>_<inference hardware>.txt`; compared with rule 2, only the first field changes from train to model. The model is downloaded directly at test time, and the "training hardware" field records the environment in which the tested model was trained.

Following this convention, the test scenario and functionality can be read directly from a config file name, as the sketch below illustrates.
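As a quick illustration, the snippet below classifies a config file purely from its name according to the rules above; the helper name `classify_config` is ours, not part of the test suite:
```shell
# A minimal sketch: classify a TIPC config file by its name
classify_config() {
    local name
    name=$(basename "$1" .txt)
    case ${name} in
        train_infer_python) echo "basic training + python inference (shortened alias)" ;;
        train_*)            echo "training + inference chain config" ;;
        model_*)            echo "inference-only config (model is downloaded at test time)" ;;
        *)                  echo "does not follow the naming convention" ;;
    esac
}
classify_config model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt
# -> inference-only config (model is downloaded at test time)
```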
### Test Workflow

With this tool you can test whether each functionality is supported and whether prediction results stay aligned. The workflow is as follows:
<div align="center">
@@ -99,7 +114,8 @@ test_tipc/
- `test_train_inference_python.sh`: tests basic Python-based model training, evaluation, and inference, including pruning, quantization, and distillation.
- `test_inference_cpp.sh`: tests C++-based model inference.
- `test_serving.sh`: tests service deployment based on Paddle Serving.
- `test_lite_arm_cpu_cpp.sh`: tests Paddle-Lite-based C++ inference deployment on ARM CPU.
- `test_paddle2onnx.sh`: tests Paddle2ONNX model conversion and verifies its correctness.
<a name="more"></a>
#### More Tutorials
@@ -107,4 +123,5 @@ test_tipc/
[Using test_train_inference_python](docs/test_train_inference_python.md)
[Using test_inference_cpp](docs/test_inference_cpp.md)
[Using test_serving](docs/test_serving.md)
[Using test_lite_arm_cpu_cpp](docs/test_lite_arm_cpu_cpp.md)
[Using test_paddle2onnx](docs/test_paddle2onnx.md)
@@ -2,38 +2,38 @@
source test_tipc/common_func.sh
FILENAME=$1
dataline=$(awk 'NR==1, NR==16{print}' $FILENAME)
# parser params
IFS=$'\n'
lines=(${dataline})

# parser cpp inference model
model_name=$(func_parser_value "${lines[1]}")
use_opencv=$(func_parser_value "${lines[2]}")
cpp_infer_model_dir_list=$(func_parser_value "${lines[3]}")
cpp_infer_is_quant=$(func_parser_value "${lines[4]}")
# parser cpp inference
inference_cmd=$(func_parser_value "${lines[5]}")
cpp_use_gpu_key=$(func_parser_key "${lines[6]}")
cpp_use_gpu_list=$(func_parser_value "${lines[6]}")
cpp_use_mkldnn_key=$(func_parser_key "${lines[7]}")
cpp_use_mkldnn_list=$(func_parser_value "${lines[7]}")
cpp_cpu_threads_key=$(func_parser_key "${lines[8]}")
cpp_cpu_threads_list=$(func_parser_value "${lines[8]}")
cpp_batch_size_key=$(func_parser_key "${lines[9]}")
cpp_batch_size_list=$(func_parser_value "${lines[9]}")
cpp_use_trt_key=$(func_parser_key "${lines[10]}")
cpp_use_trt_list=$(func_parser_value "${lines[10]}")
cpp_precision_key=$(func_parser_key "${lines[11]}")
cpp_precision_list=$(func_parser_value "${lines[11]}")
cpp_infer_model_key=$(func_parser_key "${lines[12]}")
cpp_image_dir_key=$(func_parser_key "${lines[13]}")
cpp_infer_img_dir=$(func_parser_value "${lines[13]}")
cpp_infer_key1=$(func_parser_key "${lines[14]}")
cpp_infer_value1=$(func_parser_value "${lines[14]}")
cpp_benchmark_key=$(func_parser_key "${lines[15]}")
cpp_benchmark_value=$(func_parser_value "${lines[15]}")

LOG_PATH="./test_tipc/output"
mkdir -p ${LOG_PATH}
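# For reference, a hypothetical sketch of the key:value layout the parser above
# expects in the first 16 lines of the config file; actual keys and values come
# from model_linux_gpu_normal_normal_infer_cpp_linux_gpu_cpu.txt:
#   model_name:ocr_det
#   use_opencv:True
#   infer_model_dir_list:./inference/ch_ppocr_mobile_v2.0_det_infer/
#   ...
#   --benchmark:True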
......
@@ -3,8 +3,7 @@ source ./common_func.sh
export LD_LIBRARY_PATH=${PWD}:$LD_LIBRARY_PATH
FILENAME=$1
dataline=$(cat $FILENAME)
echo $dataline
# parser params
IFS=$'\n'
lines=(${dataline})
@@ -12,13 +11,14 @@ lines=(${dataline})
# parser lite inference
lite_inference_cmd=$(func_parser_value "${lines[1]}")
lite_model_dir_list=$(func_parser_value "${lines[2]}")
runtime_device=$(func_parser_value "${lines[3]}")
lite_cpu_threads_list=$(func_parser_value "${lines[4]}")
lite_batch_size_list=$(func_parser_value "${lines[5]}")
lite_infer_img_dir_list=$(func_parser_value "${lines[8]}")
lite_config_dir=$(func_parser_value "${lines[9]}")
lite_rec_dict_dir=$(func_parser_value "${lines[10]}")
lite_benchmark_value=$(func_parser_value "${lines[11]}")

LOG_PATH="./output"
mkdir -p ${LOG_PATH}
@@ -37,23 +37,14 @@ function func_lite(){
else
precision="FP32"
fi
is_single_img=$(echo $_img_dir | grep -E ".jpg|.jpeg|.png|.JPEG|.JPG")
if [[ "$is_single_img" != "" ]]; then
single_img="True"
else
single_img="False"
fi
# lite inference
for num_threads in ${lite_cpu_threads_list[*]}; do
    for batchsize in ${lite_batch_size_list[*]}; do
        _save_log_path="${_log_path}/lite_${_lite_model}_runtime_device_${runtime_device}_precision_${precision}_batchsize_${batchsize}_threads_${num_threads}.log"
        command="${_script} ${_lite_model} ${runtime_device} ${precision} ${num_threads} ${batchsize} ${_img_dir} ${_config} ${lite_benchmark_value} > ${_save_log_path} 2>&1"
        eval ${command}
        status_check $? "${command}" "${status_log}"
    done
done
}
@@ -64,6 +55,6 @@ IFS="|"
for lite_model in ${lite_model_dir_list[*]}; do
    # run lite inference
    for img_dir in ${lite_infer_img_dir_list[*]}; do
        func_lite "${lite_inference_cmd}" "${lite_model}_opt.nb" "${LOG_PATH}" "${img_dir}" "${lite_config_dir}"
    done
done
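# How the "|"-separated list expands (the values here are illustrative):
#   lite_model_dir_list="ch_PP-OCRv2_det_infer|ch_PP-OCRv2_det_slim_infer"
#   with IFS="|", the loop visits each entry in turn and appends "_opt.nb",
#   matching the optimized models produced by prepare_lite.sh.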
@@ -27,7 +27,7 @@ from ppocr.data import build_dataloader
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.metrics import build_metric
from ppocr.utils.save_load import load_model
from ppocr.utils.utility import print_dict
import tools.program as program
@@ -60,7 +60,7 @@ def main():
else:
    model_type = None
best_model_dict = load_model(config, model)
if len(best_model_dict):
    logger.info('metric in ckpt ***************')
    for k, v in best_model_dict.items():
......
@@ -27,7 +27,7 @@ sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
from ppocr.data import build_dataloader
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.utils.save_load import load_model
from ppocr.utils.utility import print_dict
import tools.program as program
@@ -57,7 +57,7 @@ def main():
model = build_model(config['Architecture'])
best_model_dict = load_model(config, model)
if len(best_model_dict):
    logger.info('metric in ckpt ***************')
    for k, v in best_model_dict.items():
......
@@ -26,7 +26,7 @@ from paddle.jit import to_static
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.utils.save_load import load_model
from ppocr.utils.logging import get_logger
from tools.program import load_config, merge_config, ArgsParser
@@ -107,7 +107,7 @@ def main():
else:  # base rec model
    config["Architecture"]["Head"]["out_channels"] = char_num
model = build_model(config["Architecture"])
load_model(config, model)
model.eval()
save_path = config["Global"]["save_inference_dir"]
......
@@ -32,7 +32,7 @@ import paddle
from ppocr.data import create_operators, transform
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.utils.save_load import load_model
from ppocr.utils.utility import get_image_file_list
import tools.program as program
@@ -47,7 +47,7 @@ def main():
# build model
model = build_model(config['Architecture'])
load_model(config, model)
# create data ops
transforms = []
......
@@ -34,7 +34,7 @@ import paddle
from ppocr.data import create_operators, transform
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.utils.save_load import load_model
from ppocr.utils.utility import get_image_file_list
import tools.program as program
@@ -59,7 +59,7 @@ def main():
# build model
model = build_model(config['Architecture'])
load_model(config, model)
# build post process
post_process_class = build_post_process(config['PostProcess'])
......
@@ -34,7 +34,7 @@ import paddle
from ppocr.data import create_operators, transform
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.utils.save_load import load_model
from ppocr.utils.utility import get_image_file_list
import tools.program as program
@@ -68,7 +68,7 @@ def main():
# build model
model = build_model(config['Architecture'])
load_model(config, model)
# build post process
post_process_class = build_post_process(config['PostProcess'],
......
@@ -33,7 +33,7 @@ import paddle
from ppocr.data import create_operators, transform
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.utils.save_load import load_model
from ppocr.utils.utility import get_image_file_list
import tools.program as program
@@ -58,7 +58,7 @@ def main():
model = build_model(config['Architecture'])
load_model(config, model)
# create data ops
transforms = []
@@ -75,9 +75,7 @@ def main():
    'gsrm_slf_attn_bias1', 'gsrm_slf_attn_bias2'
]
elif config['Architecture']['algorithm'] == "SAR":
    op[op_name]['keep_keys'] = ['image', 'valid_ratio']
else:
    op[op_name]['keep_keys'] = ['image']
transforms.append(op)
......
@@ -34,11 +34,12 @@ from paddle.jit import to_static
from ppocr.data import create_operators, transform
from ppocr.modeling.architectures import build_model
from ppocr.postprocess import build_post_process
from ppocr.utils.save_load import load_model
from ppocr.utils.utility import get_image_file_list
import tools.program as program
import cv2

def main(config, device, logger, vdl_writer):
    global_config = config['Global']
@@ -53,7 +54,7 @@ def main(config, device, logger, vdl_writer):
model = build_model(config['Architecture'])
load_model(config, model)
# create data ops
transforms = []
@@ -104,4 +105,3 @@ def main(config, device, logger, vdl_writer):
if __name__ == '__main__':
    config, device, logger, vdl_writer = program.preprocess()
    main(config, device, logger, vdl_writer)
@@ -35,7 +35,7 @@ from ppocr.losses import build_loss
from ppocr.optimizer import build_optimizer
from ppocr.postprocess import build_post_process
from ppocr.metrics import build_metric
from ppocr.utils.save_load import load_model
import tools.program as program

dist.get_world_size()
@@ -97,7 +97,7 @@ def main(config, device, logger, vdl_writer):
# build metric
eval_class = build_metric(config['Metric'])
# load pretrain model
pre_best_model_dict = load_model(config, model, optimizer)

logger.info('train dataloader has {} iters'.format(len(train_dataloader)))
if valid_dataloader is not None:
    logger.info('valid dataloader has {} iters'.format(
......