ModelZoo / PaddleOCR_onnxruntime · Commits

Commit 3859b205
Authored Aug 09, 2023 by yangql

Update README.md, Python/paddleocr.py, CMakeLists.txt files

Parent: 5713e0ca
Pipeline #502 failed with stages in 0 seconds
Showing 3 changed files with 17 additions and 17 deletions (+17 -17)

CMakeLists.txt        +2   -2
Python/paddleocr.py   +0   -0
README.md             +15  -15
CMakeLists.txt
@@ -2,7 +2,7 @@
 cmake_minimum_required(VERSION 3.5)
 # Set the project name
-project(RapidOcrOnnx)
+project(PaddleOCR_Ort)
 # Set the compiler
 set(CMAKE_CXX_COMPILER g++)
@@ -43,4 +43,4 @@ set(SOURCE_FILES ${CMAKE_CURRENT_SOURCE_DIR}/Src/main.cpp
 )
 # Add the executable target
-add_executable(RapidOcr ${SOURCE_FILES})
+add_executable(PaddleOCR ${SOURCE_FILES})
Python/rapidocr.py → Python/paddleocr.py
File moved
README.md
-# RapidOCR
+# PaddleOCR
 ## Model introduction
 Currently the fastest-running, most widely supported, fully open-source and free multi-platform, multi-language OCR that can be quickly deployed offline.
 ## Model structure
-RapidOCR uses three models, ch_PP-OCRv3_det + ch_ppocr_mobile_v2.0_cls + ch_PP-OCRv3_rec, to recognize text in images.
+PaddleOCR uses three models, ch_PP-OCRv3_det + ch_ppocr_mobile_v2.0_cls + ch_PP-OCRv3_rec, to recognize text in images.
 ## Python inference
-This project uses the RapidOCR model with the ONNXRuntime inference framework for image text recognition. Download the model files from https://pan.baidu.com/s/1uGHhimKLb5k5f9xaFmNBwQ (access code: ggvz) and save the ch_PP-OCRv3_det_infer.onnx, ch_ppocr_mobile_v2.0_cls_infer.onnx and ch_PP-OCRv3_rec_infer.onnx model files under the Resource/Models folder. The following shows how to run the Python code example; a detailed description of the Python example is in Tutorial_Python.md under the Doc directory.
+This project uses the PaddleOCR model with the ONNXRuntime inference framework for image text recognition. Download the model files from https://pan.baidu.com/s/1uGHhimKLb5k5f9xaFmNBwQ (access code: ggvz) and save the ch_PP-OCRv3_det_infer.onnx, ch_ppocr_mobile_v2.0_cls_infer.onnx and ch_PP-OCRv3_rec_infer.onnx model files under the Resource/Models folder. The following shows how to run the Python code example; a detailed description of the Python example is in Tutorial_Python.md under the Doc directory.
 ### Download the image
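For readers of the hunk above, here is a minimal sketch of what driving the detection stage of the det + cls + rec pipeline with onnxruntime could look like. The model file names come from the README; the sample image path, the resize-to-a-multiple-of-32 step, and the normalization constants are assumptions, not the project's actual preprocessing in Python/paddleocr.py.

```python
# Sketch only: load the three models named in the README and run detection.
import cv2
import numpy as np
import onnxruntime as ort

det = ort.InferenceSession("Resource/Models/ch_PP-OCRv3_det_infer.onnx")
cls = ort.InferenceSession("Resource/Models/ch_ppocr_mobile_v2.0_cls_infer.onnx")
rec = ort.InferenceSession("Resource/Models/ch_PP-OCRv3_rec_infer.onnx")

def det_preprocess(path):
    # Resize so both sides are multiples of 32, scale to [0, 1], normalize,
    # and convert HWC BGR -> NCHW float32 (assumed values, not from the repo).
    img = cv2.imread(path)
    h, w = img.shape[:2]
    img = cv2.resize(img, ((w + 31) // 32 * 32, (h + 31) // 32 * 32))
    img = img.astype(np.float32) / 255.0
    img = (img - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
    return img.transpose(2, 0, 1)[np.newaxis].astype(np.float32)

x = det_preprocess("Resource/Images/sample.jpg")          # hypothetical test image
prob_map, = det.run(None, {det.get_inputs()[0].name: x})  # text-region probability map
print(prob_map.shape)
# Cropped, box-aligned regions would then go to cls (orientation) and rec (text).
```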
@@ -27,8 +27,8 @@ export PYTHONPATH=/opt/dtk/lib:$PYTHONPATH
 ### Install dependencies
 ```python
-# Enter the rapidocr ort project root directory
-cd <path_to_rapidocr_ort>
+# Enter the paddleocr ort project root directory
+cd <path_to_paddleocr_ort>
 # Enter the example program directory
 cd Python/
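As a small aside to the installation hunk above (not part of the README), one way to confirm that the onnxruntime package pulled in by requirements.txt is importable before running the example:

```python
# Sanity-check sketch: confirm onnxruntime imports and list the execution
# providers it was built with (e.g. CPUExecutionProvider).
import onnxruntime as ort

print("onnxruntime", ort.__version__)
print("providers:", ort.get_available_providers())
```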
@@ -40,10 +40,10 @@ pip install -r requirements.txt
 ### Run the example
 ```python
-python rapidocr.py
+python paddleocr.py
 ```
-As shown below, given an input image the RapidOcr model recognizes the text and the text boxes.
+As shown below, given an input image the PaddleOCR model recognizes the text and the text boxes.
 ```
 [[[[245.0, 9.0], [554.0, 8.0], [554.0, 27.0], [245.0, 28.0]], '人生活的真实写照:善有善报,恶有恶报。', '0.9306996673345566'], [[[9.0, 49.0], [522.0, 50.0], [522.0, 69.0], [9.0, 68.0]], '我们中国人有一句俗语说:“种瓜得瓜,种豆得豆。”而这就是每个', '0.9294075581335253'], [[[84.0, 105.0], [555.0, 104.0], [555.0, 125.0], [85.0, 127.0]], "every man's life: good begets good, and evil leads to evil.", '0.8932319914301237'], [[[28.0, 147.0], [556.0, 146.0], [556.0, 168.0], [28.0, 169.0]], 'melons; if he sows beans, he will reap beans." And this is true of', '0.900923888185131'], [[[0.0, 185.0], [524.0, 188.0], [524.0, 212.0], [0.0, 209.0]], 'We Chinese have a saying:"If a man plants melons, he will reap', '0.9216671202863965'], [[[295.0, 248.0], [553.0, 248.0], [553.0, 264.0], [295.0, 264.0]], '它不仅适用于今生,也适用于来世。', '0.927988795673146'], [[[14.0, 289.0], [554.0, 290.0], [554.0, 307.0], [14.0, 306.0]], '一每一个行为都有一种结果。在我看来,这种想法是全宇宙的道德基础;', '0.88565122719967'], [[[9.0, 330.0], [521.0, 330.0], [521.0, 349.0], [9.0, 349.0]], '假如说过去的日子曾经教给我们一些什么的话,那就是有因必有果一', '0.9162070232052957'], [[[343.0, 388.0], [555.0, 388.0], [555.0, 405.0], [343.0, 405.0]], 'in this world and the next.', '0.8764956444501877'], [[[15.0, 426.0], [554.0, 426.0], [554.0, 448.0], [15.0, 448.0]], 'opinion, is the moral foundation of the universe; it applies equally', '0.9183026262815448'], [[[62.0, 466.0], [556.0, 468.0], [556.0, 492.0], [62.0, 490.0]], 'effect - every action has a consequence. This thought, in my', '0.9308378403304053']]
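The sample output in the hunk above is a list of [corner points, text, confidence] triples. A purely illustrative helper (the layout is inferred from that sample, not taken from the project's code) that turns each triple into an axis-aligned box:

```python
# Illustrative only: convert [points, text, score] triples like the sample
# output above into bounding rectangles.
results = [
    [[[245.0, 9.0], [554.0, 8.0], [554.0, 27.0], [245.0, 28.0]],
     "人生活的真实写照:善有善报,恶有恶报。", "0.9306996673345566"],
]
for points, text, score in results:
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    print(f"box=({min(xs):.0f},{min(ys):.0f})-({max(xs):.0f},{max(ys):.0f}) "
          f"score={float(score):.3f} text={text}")
```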
@@ -51,7 +51,7 @@ python rapidocr.py
 ## C++ inference
-This project uses the RapidOCR model with the ONNXRuntime inference framework for image text recognition. Download the model files from https://pan.baidu.com/s/1uGHhimKLb5k5f9xaFmNBwQ (access code: ggvz) and save the ch_PP-OCRv3_det_infer.onnx, ch_ppocr_mobile_v2.0_cls_infer.onnx and ch_PP-OCRv3_rec_infer.onnx model files under the Resource/Models folder. The following shows how to run the C++ code example; a detailed description is in Tutorial_Cpp.md under the Doc directory.
+This project uses the PaddleOCR model with the ONNXRuntime inference framework for image text recognition. Download the model files from https://pan.baidu.com/s/1uGHhimKLb5k5f9xaFmNBwQ (access code: ggvz) and save the ch_PP-OCRv3_det_infer.onnx, ch_ppocr_mobile_v2.0_cls_infer.onnx and ch_PP-OCRv3_rec_infer.onnx model files under the Resource/Models folder. The following shows how to run the C++ code example; a detailed description is in Tutorial_Cpp.md under the Doc directory.
 ### Download the image
@@ -70,7 +70,7 @@ rbuild build -d depend
 Add the dependency libraries to the LD_LIBRARY_PATH environment variable by appending the following line to ~/.bashrc:
 ```
-export LD_LIBRARY_PATH=<path_to_rapidocr_ort>/depend/lib64/:$LD_LIBRARY_PATH
+export LD_LIBRARY_PATH=<path_to_paddleocr_ort>/depend/lib64/:$LD_LIBRARY_PATH
 ```
 Then run:
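As an alternative to editing ~/.bashrc, a sketch based only on the paths shown in the hunk above (the <path_to_paddleocr_ort> placeholder is left as-is and must be filled in) sets LD_LIBRARY_PATH just for the process that runs the compiled example:

```python
# Sketch: extend LD_LIBRARY_PATH for a single child process instead of
# persisting it in ~/.bashrc.
import os
import subprocess

env = dict(os.environ)
env["LD_LIBRARY_PATH"] = (
    "<path_to_paddleocr_ort>/depend/lib64/:" + env.get("LD_LIBRARY_PATH", "")
)
subprocess.run(["./PaddleOCR"], cwd="<path_to_paddleocr_ort>/build", env=env, check=True)
```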
@@ -83,17 +83,17 @@ source /opt/dtk/env.sh
 ### Run the example
 ```cpp
-# Enter the rapidocr ort project root directory
-cd <path_to_rapidocr_ort>
+# Enter the paddleocr ort project root directory
+cd <path_to_paddleocr_ort>
 # Enter the build directory
 cd build/
 # Run the example program
-./RapidOcr
+./PaddleOCR
 ```
-As shown below, given an input image the RapidOcr model recognizes the text and the text boxes, and the results are saved in the /Resource/Images/ folder.
+As shown below, given an input image the PaddleOCR model recognizes the text and the text boxes, and the results are saved in the /Resource/Images/ folder.
 ```
 TextBox[0](+padding)[score(0.711119),[x: 293, y: 58], [x: 604, y: 58], [x: 604, y: 79], [x: 293, y: 79]]
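The C++ example prints one TextBox line per detection in the format shown above. A hedged helper (the format is inferred from that single sample line, not from the project's logging code) for pulling the score and corner points out of such a line:

```python
# Illustrative only: parse a "TextBox[...]" log line of the form shown above.
import re

line = ("TextBox[0](+padding)[score(0.711119),[x: 293, y: 58], [x: 604, y: 58], "
        "[x: 604, y: 79], [x: 293, y: 79]]")
score = float(re.search(r"score\(([\d.]+)\)", line).group(1))
points = [(int(x), int(y)) for x, y in re.findall(r"\[x: (\d+), y: (\d+)\]", line)]
print(score, points)
```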
@@ -123,10 +123,10 @@ crnnTime[11](38.051758ms)
 ## Source repository and issue feedback
-https://developer.hpccube.com/codes/modelzoo/rapidocr_ort
+https://developer.hpccube.com/codes/modelzoo/paddleocr_ort
 ## References
 https://github.com/RapidAI/RapidOCR
 https://github.com/RapidAI/RapidOcrOnnx
\ No newline at end of file