# Triton Performance Analyzer
Triton Performance Analyzer is a CLI tool that helps you optimize the
inference performance of models running on Triton Inference Server by measuring
changes in performance as you experiment with different optimization strategies.
# Features
### Inference Load Modes
- [Concurrency Mode](docs/inference_load_modes.md#concurrency-mode) simulates
  load by maintaining a specific concurrency of outgoing requests to the
  server
- [Request Rate Mode](docs/inference_load_modes.md#request-rate-mode) simulates
load by sending consecutive requests at a specific rate to the server
- [Custom Interval Mode](docs/inference_load_modes.md#custom-interval-mode)
  simulates load by sending consecutive requests at specific intervals to the
  server (see the example after this list)
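Each load mode maps to a `perf_analyzer` command-line option. A minimal sketch
of selecting each mode (the model name `simple` matches the quick start below;
`intervals.txt` is a hypothetical file listing per-request intervals):
```bash
# Concurrency Mode: sweep from 1 to 4 concurrent outstanding requests
perf_analyzer -m simple --concurrency-range 1:4

# Request Rate Mode: issue requests at a fixed rate (requests per second)
perf_analyzer -m simple --request-rate-range 100

# Custom Interval Mode: read per-request intervals, in microseconds,
# from a user-supplied file (intervals.txt is a placeholder)
perf_analyzer -m simple --request-intervals intervals.txt
```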
### Performance Measurement Modes
- [Time Windows Mode](docs/measurements_metrics.md#time-windows) measures model
performance repeatedly over a specific time interval until performance has
stabilized
- [Count Windows Mode](docs/measurements_metrics.md#count-windows) measures
  model performance repeatedly over a specific number of requests until
  performance has stabilized (see the example after this list)
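Measurement modes are likewise selected on the command line. A minimal sketch
(the window sizes shown are illustrative):
```bash
# Time Windows Mode: repeated measurement windows of 5000 ms each
perf_analyzer -m simple --measurement-mode time_windows --measurement-interval 5000

# Count Windows Mode: repeated measurement windows of 50 requests each
perf_analyzer -m simple --measurement-mode count_windows --measurement-request-count 50
```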
### Other Features
- [Sequence Models](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/architecture.md#stateful-models)
and
[Ensemble Models](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/architecture.md#ensemble-models)
can be profiled in addition to standard/stateless models
- [Input Data](docs/input_data.md) for model inferences can be auto-generated
  or user-specified, and model outputs can be verified
- [TensorFlow Serving](docs/benchmarking.md#benchmarking-tensorflow-serving) and
  [TorchServe](docs/benchmarking.md#benchmarking-torchserve) can be used as the
  inference server in addition to the default Triton server (see the sketch
  after this list)
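For instance, fixed input data can be supplied from a JSON file, and a
non-Triton backend can be targeted via `--service-kind`. A sketch, assuming a
placeholder file `input_data.json` (see the linked docs for the JSON schema
and the companion flags each backend requires):
```bash
# supply fixed inputs from a JSON file instead of auto-generated data
perf_analyzer -m simple --input-data input_data.json

# benchmark TensorFlow Serving instead of Triton (TF Serving is gRPC-only)
perf_analyzer -m simple --service-kind tfserving -i grpc
```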
# Quick Start
The steps below will guide you through getting started with Perf Analyzer.
### Step 1: Start Triton Container
```bash
export RELEASE= # e.g. `export RELEASE=23.02` to use the February 2023 release
docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3
docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3
```
### Step 2: Download `simple` Model
```bash
# inside triton container
git clone --depth 1 https://github.com/triton-inference-server/server
mkdir model_repository ; cp -r server/docs/examples/model_repository/simple model_repository
```
### Step 3: Start Triton Server
```bash
# inside triton container
tritonserver --model-repository $(pwd)/model_repository &> server.log &
# confirm server is ready, look for 'HTTP/1.1 200 OK'
curl -v localhost:8000/v2/health/ready
# detach (CTRL-p CTRL-q)
```
### Step 4: Start Triton SDK Container
```bash
docker pull nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk
docker run --gpus all --rm -it --net host nvcr.io/nvidia/tritonserver:${RELEASE}-py3-sdk
```
### Step 5: Run Perf Analyzer
```bash
# inside sdk container
perf_analyzer -m simple
```
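With no extra options, this runs Concurrency Mode with a concurrency of 1 over
time windows. A variant worth trying, sketched here, sweeps concurrency and
writes the latency report to a CSV file (`perf_report.csv` is a placeholder
name):
```bash
# inside sdk container: sweep concurrency 1-4 and save a latency report
perf_analyzer -m simple --concurrency-range 1:4 -f perf_report.csv
```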
See the full [quick start guide](docs/quick_start.md) for additional tips on
how to analyze output.
# Documentation
- [Installation](docs/install.md)
- [Perf Analyzer CLI](docs/cli.md)
- [Inference Load Modes](docs/inference_load_modes.md)
- [Input Data](docs/input_data.md)
- [Measurements & Metrics](docs/measurements_metrics.md)
- [Benchmarking](docs/benchmarking.md)
# Contributing
Contributions to Triton Perf Analyzer are more than welcome. To contribute,
please review the [contribution
guidelines](https://github.com/triton-inference-server/server/blob/main/CONTRIBUTING.md),
then fork and create a pull request.
# Reporting problems, asking questions
We appreciate any feedback, questions, or bug reports regarding this
project. When you need help with code, follow the process outlined in the
Stack Overflow [minimal, complete, verifiable
example](https://stackoverflow.com/help/mcve) guide. Ensure posted examples
are:
- minimal - use as little code as possible that still produces the
  same problem
- complete - provide all parts needed to reproduce the problem. Check
  whether you can strip external dependencies and still show the problem. The
  less time we spend reproducing problems, the more time we have to
  fix them
- verifiable - test the code you're about to provide to make sure it
  reproduces the problem. Remove any other problems that are not
  related to your request/question