<!--
Copyright (c) 2020-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
 * Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
 * Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.
 * Neither the name of NVIDIA CORPORATION nor the names of its
   contributors may be used to endorse or promote products derived
   from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

# Triton Performance Analyzer

Triton Performance Analyzer is a CLI tool that helps you optimize the
inference performance of models running on Triton Inference Server by measuring
changes in performance as you experiment with different optimization strategies.

<br>

# Features

### Inference Load Modes

- [Concurrency Mode](docs/inference_load_modes.md#concurrency-mode) simulates
  load by maintaining a specific concurrency of outgoing requests to the
  server

- [Request Rate Mode](docs/inference_load_modes.md#request-rate-mode) simulates
  load by sending consecutive requests at a specific rate to the server

- [Custom Interval Mode](docs/inference_load_modes.md#custom-interval-mode)
  simulates load by sending consecutive requests at specific intervals to the
  server (example invocations for all three modes are sketched below)
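
As a minimal sketch, assuming the `simple` model used in the quick start
below, each mode maps to a single CLI flag (the values shown are
illustrative, not recommendations):

```bash
# Concurrency Mode: sweep the number of outstanding requests from 1 to 4
perf_analyzer -m simple --concurrency-range 1:4

# Request Rate Mode: sweep from 100 to 200 requests per second in steps of 50
perf_analyzer -m simple --request-rate-range 100:200:50

# Custom Interval Mode: read per-request send intervals (in microseconds) from a file
perf_analyzer -m simple --request-intervals intervals.txt
```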

### Performance Measurement Modes

- [Time Windows Mode](docs/measurements_metrics.md#time-windows) measures model
  performance repeatedly over a specific time interval until performance has
  stabilized

- [Count Windows Mode](docs/measurements_metrics.md#count-windows) measures
  model performance repeatedly over a specific number of requests until
  performance has stabilized (flag usage for both modes is sketched below)
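
As a sketch, each windowing mode is selected with `--measurement-mode`; the
window sizes below are illustrative:

```bash
# Time Windows Mode (the default): repeat 5000 ms measurement windows
perf_analyzer -m simple --measurement-mode time_windows --measurement-interval 5000

# Count Windows Mode: repeat windows of at least 50 requests each
perf_analyzer -m simple --measurement-mode count_windows --measurement-request-count 50
```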

### Other Features

- [Sequence Models](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/architecture.md#stateful-models)
  and
  [Ensemble Models](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/architecture.md#ensemble-models)
  can be profiled in addition to standard/stateless models

- [Input Data](docs/input_data.md) to model inferences can be auto-generated
  or specified explicitly, and model output can be validated (a sketch of a
  hand-written input file follows this list)

- [TensorFlow Serving](docs/benchmarking.md#benchmarking-tensorflow-serving) and
  [TorchServe](docs/benchmarking.md#benchmarking-torchserve) can be used as the
  inference server in addition to the default Triton server
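
As a sketch of specifying input data by hand, assuming the `simple` model's
two int32 inputs `INPUT0` and `INPUT1` (16 elements each); the file name and
tensor values here are illustrative:

```bash
# write a hand-crafted input-data file (illustrative values)
cat > input_data.json <<'EOF'
{
  "data": [
    {
      "INPUT0": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
      "INPUT1": [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
    }
  ]
}
EOF

# use the file instead of auto-generated data
perf_analyzer -m simple --input-data input_data.json
```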

<br>

# Quick Start

The steps below guide you through getting started with Perf Analyzer.

### Step 1: Start Triton Container

```bash
export RELEASE=<yy.mm> # e.g. for the February 2023 release, use `export RELEASE=23.02`

docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3

docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3
```

### Step 2: Download `simple` Model

```bash
# inside triton container
git clone --depth 1 https://github.com/triton-inference-server/server

mkdir model_repository ; cp -r server/docs/examples/model_repository/simple model_repository
```

### Step 3: Start Triton Server

```bash
# inside triton container
tritonserver --model-repository $(pwd)/model_repository &> server.log &

# confirm server is ready, look for 'HTTP/1.1 200 OK'
curl -v localhost:8000/v2/health/ready

# detach (CTRL-p CTRL-q)
```

### Step 4: Start Triton SDK Container

```bash
docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk

docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk
```

### Step 5: Run Perf Analyzer

```bash
# inside sdk container
perf_analyzer -m simple
```
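
From here the flags shown earlier can be combined; for example, a sketch of a
run against the gRPC endpoint with a concurrency sweep:

```bash
# inside sdk container: use gRPC instead of the default HTTP and sweep concurrency
perf_analyzer -m simple -i grpc --concurrency-range 1:4
```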

See the full [quick start guide](docs/quick_start.md) for additional tips on
how to analyze output.

<br>

# Documentation

- [Installation](docs/install.md)
- [Perf Analyzer CLI](docs/cli.md)
- [Inference Load Modes](docs/inference_load_modes.md)
- [Input Data](docs/input_data.md)
- [Measurements & Metrics](docs/measurements_metrics.md)
- [Benchmarking](docs/benchmarking.md)

<br>

# Contributing

Contributions to Triton Perf Analyzer are more than welcome. To contribute,
please review the [contribution
guidelines](https://github.com/triton-inference-server/server/blob/main/CONTRIBUTING.md),
then fork and create a pull request.

<br>

# Reporting problems, asking questions

We appreciate any feedback, questions, or bug reports regarding this
project. When help with code is needed, follow the process outlined in
the Stack Overflow document on minimal, complete, verifiable examples
(https://stackoverflow.com/help/mcve). Ensure posted examples are:

- minimal - use as little code as possible that still produces the
  same problem

- complete - provide all parts needed to reproduce the problem. Check
  whether you can strip external dependencies and still show the problem.
  The less time we spend reproducing problems, the more time we have to
  fix them

- verifiable - test the code you're about to provide to make sure it
  reproduces the problem. Remove any other problems that are not
  related to your request/question.