# OCR Pipeline WebService

(English|[简体中文](./README_CN.md))

PaddleOCR provides two service deployment methods:
- Based on **PaddleHub Serving**: Code path is "`./deploy/hubserving`". Please refer to the [tutorial](../../deploy/hubserving/readme_en.md)
- Based on **PaddleServing**: Code path is "`./deploy/pdserving`". Please follow this tutorial.

# Service deployment based on PaddleServing  

This document will introduce how to use the [PaddleServing](https://github.com/PaddlePaddle/Serving/blob/develop/README.md) to deploy the PPOCR dynamic graph model as a pipeline online service.

Some Key Features of Paddle Serving:
- Integrates seamlessly with the Paddle training pipeline; most Paddle models can be deployed with a single command.
- Supports industrial serving features such as model management, online loading, and online A/B testing.
- Supports highly concurrent and efficient communication between clients and servers.

For an introduction and tutorials on the Paddle Serving deployment framework, refer to the [document](https://github.com/PaddlePaddle/Serving/blob/develop/README.md).


## Contents
- [OCR Pipeline WebService](#ocr-pipeline-webservice)
- [Service deployment based on PaddleServing](#service-deployment-based-on-paddleserving)
  - [Contents](#contents)
  - [Environmental preparation](#environmental-preparation)
  - [Model conversion](#model-conversion)
  - [Paddle Serving pipeline deployment](#paddle-serving-pipeline-deployment)
  - [WINDOWS Users](#windows-users)
  - [FAQ](#faq)

<a name="environmental-preparation"></a>
## Environmental preparation

Both the PaddleOCR operating environment and the Paddle Serving operating environment are required.

1. Prepare the PaddleOCR operating environment with reference to this [link](../../doc/doc_ch/installation.md).
   Download the Paddle whl package that matches your environment; version 2.0.1 is recommended. A minimal install example is shown below.
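
   The following sketch assumes a standard pip-based install; the package name and exact wheel depend on your CUDA setup, so adjust accordingly:

   ```bash
   # Install the recommended PaddlePaddle version (CPU or GPU, pick one)
   pip3 install paddlepaddle==2.0.1         # CPU
   # pip3 install paddlepaddle-gpu==2.0.1   # GPU
   ```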

2. Prepare the PaddleServing operating environment as follows:

    Install paddle-serving-server, which is used to start the service:
    ```
    pip3 install paddle-serving-server==0.6.1 # for CPU
    pip3 install paddle-serving-server-gpu==0.6.1 # for GPU
    # For other GPU environments, confirm the CUDA and TensorRT versions, then choose the matching command below
    pip3 install paddle-serving-server-gpu==0.6.1.post101 # GPU with CUDA10.1 + TensorRT6
    pip3 install paddle-serving-server-gpu==0.6.1.post11 # GPU with CUDA11 + TensorRT7
    ```

3. Install the client, which is used to send requests to the service. The commands below use the 0.7.0 whl packages; the first wheel is a matching 0.7.0 GPU server package (keep the server, client, and app versions consistent):

    ```bash
    # Install serving, used to start the service (CUDA 10.2 wheel)
    wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post102-py3-none-any.whl
    pip3 install paddle_serving_server_gpu-0.7.0.post102-py3-none-any.whl
    # For a CUDA 10.1 environment, install paddle-serving-server with the following commands instead
    # wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_server_gpu-0.7.0.post101-py3-none-any.whl
    # pip3 install paddle_serving_server_gpu-0.7.0.post101-py3-none-any.whl

    # Install the client, used to send requests to the service
    wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_client-0.7.0-cp37-none-any.whl
    pip3 install paddle_serving_client-0.7.0-cp37-none-any.whl

    # Install serving-app
    wget https://paddle-serving.bj.bcebos.com/test-dev/whl/paddle_serving_app-0.7.0-py3-none-any.whl
    pip3 install paddle_serving_app-0.7.0-py3-none-any.whl
    ```

   **Note:** If you want to install the latest version of PaddleServing, refer to this [link](https://github.com/PaddlePaddle/Serving/blob/v0.7.0/doc/Latest_Packages_CN.md).
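
   As an optional sanity check, you can list the serving packages that ended up installed:

   ```bash
   # Show the installed Paddle / Paddle Serving packages and their versions
   pip3 list | grep -i paddle
   ```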


<a name="model-conversion"></a>
## Model conversion
When using PaddleServing for service deployment, you need to convert the saved inference model into a serving model that is easy to deploy.

First, download the PP-OCR [inference model](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.3/README_ch.md#pp-ocr%E7%B3%BB%E5%88%97%E6%A8%A1%E5%9E%8B%E5%88%97%E8%A1%A8%E6%9B%B4%E6%96%B0%E4%B8%AD):
```
# Download and unzip the OCR text detection model
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar -O ch_PP-OCRv2_det_infer.tar && tar -xf ch_PP-OCRv2_det_infer.tar
# Download and unzip the OCR text recognition model
wget https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar -O ch_PP-OCRv2_rec_infer.tar &&  tar -xf ch_PP-OCRv2_rec_infer.tar
```
Then, you can use the installed paddle_serving_client tool to convert the inference models into serving models.
```
#  Detection model conversion
python3 -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_det_infer/ \
                                         --model_filename inference.pdmodel          \
                                         --params_filename inference.pdiparams       \
                                         --serving_server ./ppocrv2_det_serving/ \
                                         --serving_client ./ppocrv2_det_client/

#  Recognition model conversion
python3 -m paddle_serving_client.convert --dirname ./ch_PP-OCRv2_rec_infer/ \
                                         --model_filename inference.pdmodel          \
                                         --params_filename inference.pdiparams       \
                                         --serving_server ./ppocrv2_rec_serving/  \
                                         --serving_client ./ppocrv2_rec_client/

```

After the detection model is converted, the additional folders `ppocrv2_det_serving` and `ppocrv2_det_client` will appear in the current directory, with the following structure:
```
|- ppocrv2_det_serving/
  |- __model__  
  |- __params__
  |- serving_server_conf.prototxt  
  |- serving_server_conf.stream.prototxt

|- ppocrv2_det_client
  |- serving_client_conf.prototxt  
  |- serving_client_conf.stream.prototxt

```
The recognition model conversion produces the corresponding serving and client folders in the same way.
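
If you need to check the input and output variable names of a converted model (for example, when adapting the pre- and post-processing code), you can inspect the generated prototxt files. A minimal sketch:

```bash
# View the feed/fetch variable definitions of the converted detection model
cat ppocrv2_det_serving/serving_server_conf.prototxt
```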

<a name="paddle-serving-pipeline-deployment"></a>
## Paddle Serving pipeline deployment

1. Download the PaddleOCR code; if you have already downloaded it, you can skip this step.
    ```
    git clone https://github.com/PaddlePaddle/PaddleOCR

    # Enter the working directory  
    cd PaddleOCR/deploy/pdserving/
    ```

    The pdserving directory contains the code to start the pipeline service and send prediction requests, including:
    ```
    __init__.py
    config.yml # Start the service configuration file
    ocr_reader.py # OCR model pre-processing and post-processing code implementation
    pipeline_http_client.py # Script to send pipeline prediction request
    web_service.py # Start the script of the pipeline server
    ```

2. Run the following command to start the service.
    ```
    # Start the service and save the running log in log.txt
    python3 web_service.py &>log.txt &
    ```
    After the service is successfully started, a log similar to the following will be printed in log.txt
    ![](./imgs/start_server.png)
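
    Optionally, you can confirm that the server process is up and listening. The port 9998 below is the default `http_port` assumed from config.yml; adjust it if you changed the configuration:

    ```bash
    # Check that the web_service.py process is running and the HTTP port is open
    ps aux | grep web_service.py
    netstat -ntlp | grep 9998
    ```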

3. Send service request
    ```
    python3 pipeline_http_client.py
    ```
    After it runs successfully, the predicted results of the model will be printed in the terminal. An example of the result is:
    ![](./imgs/results.png)  
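
    For reference, the sketch below is a rough curl equivalent of the request sent by pipeline_http_client.py. It assumes the default `http_port` (9998) and the `ocr` service name from config.yml, as well as a sample image path; adjust these to your own configuration.

    ```bash
    # Hypothetical example: POST a base64-encoded image to the pipeline HTTP endpoint
    IMG_B64=$(base64 -w 0 ../../doc/imgs/11.jpg)   # assumed sample image path
    curl -s -X POST http://127.0.0.1:9998/ocr/prediction \
         -H "Content-Type: application/json" \
         -d "{\"key\": [\"image\"], \"value\": [\"${IMG_B64}\"]}"
    ```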

    Adjust the concurrency in config.yml to obtain the highest QPS. Generally, the ratio of detection concurrency to recognition concurrency is 2:1, for example:

    ```
    det:
        concurrency: 8
        ...
    rec:
        concurrency: 4
        ...
    ```

    If necessary, multiple service requests can be sent at the same time, as sketched below.
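
    A simple, illustrative way to issue several requests in parallel with the provided client script:

    ```bash
    # Launch 4 client processes concurrently and wait for all of them to finish
    for i in 1 2 3 4; do
        python3 pipeline_http_client.py &
    done
    wait
    ```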

    The performance data of the prediction service will be automatically written into the `PipelineServingLogs/pipeline.tracer` file.
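
    You can optionally follow this file while requests are being processed:

    ```bash
    # Watch the pipeline performance tracer log as it is updated
    tail -f PipelineServingLogs/pipeline.tracer
    ```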

    Tested on 200 real images with the long side of the detection input limited to 960, the average QPS on a T4 GPU can reach around 23:

    ```

    2021-05-13 03:42:36,895 ==================== TRACER ======================
    2021-05-13 03:42:36,975 Op(rec):
    2021-05-13 03:42:36,976         in[14.472382882882883 ms]
    2021-05-13 03:42:36,976         prep[9.556855855855856 ms]
    2021-05-13 03:42:36,976         midp[59.921905405405404 ms]
    2021-05-13 03:42:36,976         postp[15.345945945945946 ms]
    2021-05-13 03:42:36,976         out[1.9921216216216215 ms]
    2021-05-13 03:42:36,976         idle[0.16254943864471572]
    2021-05-13 03:42:36,976 Op(det):
    2021-05-13 03:42:36,976         in[315.4468035714286 ms]
    2021-05-13 03:42:36,976         prep[69.5980625 ms]
    2021-05-13 03:42:36,976         midp[18.989535714285715 ms]
    2021-05-13 03:42:36,976         postp[18.857803571428573 ms]
    2021-05-13 03:42:36,977         out[3.1337544642857145 ms]
    2021-05-13 03:42:36,977         idle[0.7477961159203756]
    2021-05-13 03:42:36,977 DAGExecutor:
    2021-05-13 03:42:36,977         Query count[224]
    2021-05-13 03:42:36,977         QPS[22.4 q/s]
    2021-05-13 03:42:36,977         Succ[0.9910714285714286]
    2021-05-13 03:42:36,977         Error req[169, 170]
    2021-05-13 03:42:36,977         Latency:
    2021-05-13 03:42:36,977                 ave[535.1678348214285 ms]
    2021-05-13 03:42:36,977                 .50[172.651 ms]
    2021-05-13 03:42:36,977                 .60[187.904 ms]
    2021-05-13 03:42:36,977                 .70[245.675 ms]
    2021-05-13 03:42:36,977                 .80[526.684 ms]
    2021-05-13 03:42:36,977                 .90[854.596 ms]
    2021-05-13 03:42:36,977                 .95[1722.728 ms]
    2021-05-13 03:42:36,977                 .99[3990.292 ms]
    2021-05-13 03:42:36,978 Channel (server worker num[10]):
    2021-05-13 03:42:36,978         chl0(In: ['@DAGExecutor'], Out: ['det']) size[0/0]
    2021-05-13 03:42:36,979         chl1(In: ['det'], Out: ['rec']) size[6/0]
    2021-05-13 03:42:36,979         chl2(In: ['rec'], Out: ['@DAGExecutor']) size[0/0]
    ```

## WINDOWS Users

Windows does not support Pipeline Serving. To launch Paddle Serving on Windows, use the Web Service mode instead; for more information, please refer to [Paddle Serving for Windows Users](https://github.com/PaddlePaddle/Serving/blob/develop/doc/Windows_Tutorial_EN.md).


**Windows users can only use version 0.5.0 in CPU mode.**

**Prepare Stage:**

```
pip3 install paddle-serving-server==0.5.0
pip3 install paddle-serving-app==0.3.1
```

1. Start Server

```
cd win
python3 ocr_web_server.py gpu   # for GPU users
# or
python3 ocr_web_server.py cpu   # for CPU users
```

2. Send Client Requests

```
python3 ocr_web_client.py
```

<a name="faq"></a>
## FAQ
**Q1**: No result is returned after sending a request.

**A1**: Do not use a proxy when starting the service or sending requests. Unset the proxy before starting the service and before sending requests. The commands to unset the proxy are:
```
unset https_proxy
unset http_proxy
```