*****************************************************************************
* SPDX-FileCopyrightText: Copyright (c) 2018-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: LicenseRef-NvidiaProprietary
*
* NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
* property and proprietary rights in and to this material, related
* documentation and any modifications thereto. Any use, reproduction,
* disclosure or distribution of this material and related documentation
* without an express license agreement from NVIDIA CORPORATION or
* its affiliates is strictly prohibited.
*****************************************************************************

*****************************************************************************
                     deepstream-test1-app
                             README
*****************************************************************************

===============================================================================
1. Prerequisites:
===============================================================================
Please follow the instructions in apps/sample_apps/deepstream-app/README to
install the prerequisites for the DeepStream SDK, the DeepStream SDK itself,
and the apps.

You must have the following development packages installed:
   GStreamer-1.0
   GStreamer-1.0 Base Plugins
   GStreamer-1.0 gstrtspserver
   X11 client-side library

To install these packages, execute the following command:
   sudo apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev \
   libgstrtspserver-1.0-dev libx11-dev

This example can be configured to use either the nvinfer or the nvinferserver
element for inference.
If nvinferserver is selected, the Triton Inference Server is used for inference
processing. In this case, the example needs to be run inside the
DeepStream-Triton docker container. Please refer to
samples/configs/deepstream-app-triton/README for the steps to download the
container image and set up the model repository.
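
Switching between the two inference elements is typically a one-element change
in the pipeline description. As an illustrative sketch (the config file names
below are assumptions, not files guaranteed to ship with this sample):

```
# In-process TensorRT inference:
... ! nvinfer config-file-path=./pgie_config.txt ! ...

# Triton-based inference (run inside the DeepStream-Triton container):
... ! nvinferserver config-file-path=./pgie_nvinferserver_config.txt ! ...
```

Both elements attach the same detection metadata downstream, so the rest of
the pipeline can usually stay unchanged.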

===============================================================================
2. Purpose:
===============================================================================

This document describes the sample deepstream-test1 application.

It demonstrates how to use various DeepStream SDK elements in a pipeline and
extract meaningful insights from a video stream.

===============================================================================
3. To compile:
===============================================================================

  $ Set CUDA_VER in the Makefile as per the platform.
      For both Jetson and x86, CUDA_VER=12.6
  $ sudo make (sudo is not required inside docker containers)
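
The steps above can also be scripted for non-interactive builds. A minimal
sketch, assuming the Makefile defines CUDA_VER on a line of its own
(demonstrated on a throwaway copy so nothing real is modified):

```shell
# Create a throwaway Makefile fragment to demonstrate the edit
# (the real sample Makefile is assumed to contain a "CUDA_VER?=" line).
printf 'CUDA_VER?=\nAPP:=deepstream-test1-app\n' > /tmp/Makefile.demo

# Set CUDA_VER as per the platform (12.6 for both Jetson and x86).
sed -i 's/^CUDA_VER?=.*/CUDA_VER?=12.6/' /tmp/Makefile.demo

# Confirm the substitution took effect.
grep '^CUDA_VER' /tmp/Makefile.demo
```

On the real Makefile, the same `sed` line followed by `sudo make` performs
the build without an editor.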

===============================================================================
4. Usage:
===============================================================================
terminal 1 (start the MQTT broker):
  mosquitto -c mosquitto.conf
terminal 2 (subscribe to the output topic):
  mosquitto_sub -v -h 127.0.0.1 -t /server/fromArm
terminal 3 (run the pipeline):
  /usr/bin/gst-launch-1.0 filesrc location=/workspace/shared_docker/video/cr7_1920x1080.h264 ! \
    h264parse ! mach264dec ! queue name=q1 ! mux.sink_0 \
    nvstreammux name=mux batch-size=1 ! queue name=q2 ! \
    nvinferserver config-file-path=./yolov5_nvinferserver_config.txt ! \
    nvmsgconv config=./dstest4_msgconv_config.txt payload-type=1 msg2p-newapi=1 ! \
    nvmsgbroker proto-lib=/opt/deepstream/lib/libnvds_mqtt_proto.so \
    conn-str="127.0.0.1;1883" topic="/server/fromArm" sync=0
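
With payload-type=1, nvmsgconv emits messages in the DeepStream "minimal"
JSON schema, so the output seen in terminal 2 can be post-processed with
standard tools. A hedged sketch (the sample payload below is illustrative;
verify the field names against the messages your pipeline actually emits):

```shell
# Illustrative message in the minimal schema; real payloads carry
# pipe-delimited object strings in the "objects" array.
MSG='{"version":"4.0","id":"0","@timestamp":"2024-01-01T00:00:00Z","objects":["1|10|20|50|60|Car"]}'

# Count detected objects per message with a small Python filter.
echo "$MSG" | python3 -c 'import json,sys; print(len(json.load(sys.stdin).get("objects",[])))'
```

The same filter can be attached directly to the subscriber, e.g. piping
`mosquitto_sub -h 127.0.0.1 -t /server/fromArm` line by line into it.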