NVIDIA DeepStream SDK FAQ. See the NVIDIA DeepStream SDK documentation for a full introduction.


1Q: DeepStream SDK 3.0 on Jetson AGX Xavier: running deepstream-app -c configs/deepstream-app/source30_720p_dec_infer-resnet_tiled_display_int8.txt fails with:

(gst-plugin-scanner:28556): GStreamer-WARNING **: 09:31:40.171: Failed to load plugin '/usr/lib/aarch64-linux-gnu/gstreamer-1.0/': /usr/lib/aarch64-linux-gnu/tegra/ undefined symbol: initLibNvInferPlugins

1A: Check whether the plugin library exports the symbol:

objdump -txT /usr/lib/aarch64-linux-gnu/gstreamer-1.0/ | grep initLibNvInferPlugins

If this returns nothing, initLibNvInferPlugins is not defined in that library; TensorRT changed this API name between the 5.0.0 and 5.0.3 releases, so make sure the installed TensorRT release matches the one DeepStream was built against.
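The check can be scripted as below. The .so path is hypothetical (the file name in the original log is truncated), so substitute the library from your own error message:

```shell
# Hypothetical library path -- substitute the .so named in your error log.
SO=/usr/lib/aarch64-linux-gnu/tegra/libnvinfer_plugin.so
# objdump -T lists the dynamic symbol table; an empty grep result means
# the symbol is not exported by this library.
if objdump -T "$SO" 2>/dev/null | grep -q initLibNvInferPlugins; then
    echo "initLibNvInferPlugins is exported"
else
    echo "initLibNvInferPlugins is missing"
fi
```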

2: Irrespective of Tesla or Tegra, running deepstream-app with configs under /ds package path/samples/configs/deepstream-app/ other than source30*** or source4*** is not the right way: the remaining files there are parts of the source4*** or source30*** configurations and should only be used through a source4*** or source30*** config.

3Q: With MJPEG as the input source: "Conversion not supported... gst_nvvidconv_planar_to_planar_conversion, 773"

3A: MJPEG is not supported as an input source at the moment.

4Q: Does trt-yolo-app support a video stream as input?

4A: Video-stream input is not supported yet; only images are accepted. As a workaround, you can dump a video to image frames first (for example with ffmpeg) and list those frames in the test-images file.

5Q: A common need: output to screen on a machine that only has a Tesla card used as a compute card. There are two ways through:


1. Output to sink type 1 (FakeSink) or 3 (File);

2. A hacky way: use the Tesla P4 as a virtual display. This is suggested only for development, since it takes some percentage of device memory and ultimately impacts inference performance.

     First install the NVIDIA graphics driver with OpenGL support, then query the GPUs:

     sudo nvidia-xconfig --query-gpu-info
     Number of GPUs: 2
     GPU #0:
       Name : Tesla T4
       UUID : GPU-b58f5878-b235-c28e-4e2a-44d8623d133a
       PCI BusID : PCI:3:0:0
       Number of Display Devices: 0
     GPU #1:
       Name : Tesla P4
       UUID : GPU-55bc88aa-fc94-0e86-9319-abd5fadf49ab
       PCI BusID : PCI:4:0:0
       Number of Display Devices: 0

     sudo nvidia-xconfig --busid=PCI:4:0:0 --allow-empty-initial-configuration

     Reboot the system. Install NoMachine on your Windows system, and also on the Linux server that has the P4 installed, then log in to the desktop from the Windows system through NoMachine.

6Q: Running deepstream-app with config source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt gives the error "Unable to set device in ..."

6A: Make sure the system has at least 2 GPU cards: this config uses device id 1. Otherwise, try another config, or one with the gpu id set to 0.
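As a sketch: to run the same pipeline on a single-GPU machine, point the gpu-id keys at device 0. Group and key names below follow the shipped deepstream-app sample configs; edit every group in your file that carries a gpu-id:

      [streammux]
      gpu-id=0

      [primary-gie]
      gpu-id=0

      [tiled-display]
      gpu-id=0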

7Q: yolo plugin make error: gstyoloplugin.h:36:25: fatal error: gst-nvquery.h: No such file or directory

7A: Check DEEPSTREAM_INSTALL_DIR:= in /path to your yolo package/deepstream-plugins-yolo/Makefile.config and make sure it points to your unpacked DeepStream package location.

8Q: Using a uff file with the nvinfer plugin and input-dims={-1,224,224,3}: "ERROR from element secondary3-nvinference-engine: Failed to parse config file:car.txt"

8A: Use semicolon-separated dims, for example input-dims=3;224;224;1; the fourth number should be 0 or 1.
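A minimal sketch of the relevant nvinfer config lines, assuming a 3x224x224 UFF model (the file name is a placeholder):

      uff-file=model.uff
      input-dims=3;224;224;0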

9Q: Using a USB camera as video input for deepstream on Tegra, width and height are only supported up to 640x480; if set to 1280x720, the app crashes.

9A: For a CameraV4L2 source in the config, you cannot use a format the camera does not support; check the modes the camera actually offers (for example with v4l2-ctl --list-formats-ext) and use one of those.

10Q: deepstream-yolo-app fails with: yolo.cpp:135: void Yolo::createYOLOEngine(int, std::__cxx11::string, std::__cxx11::string, std::__cxx11::string, nvinfer1::DataType, Int8EntropyCalibrator*): Assertion `fileExists(yoloConfigPath)' failed.

10A: Make sure the parameters below are set to the correct paths in /path to lib/lib/network_config.cpp:

  const std::string kDS_LIB_PATH = "/path to ds package/sources/gst-plugins/Deepstream_DsExample_Plugin-master/sources/lib/";
  const std::string kMODELS_PATH = kDS_LIB_PATH + "models/";
  const std::string kDETECTION_RESULTS_PATH = "../../../data/detections/";
  const std::string kCALIBRATION_SET = "../../../data/calibration_images.txt";
  const std::string kTEST_IMAGES = "../../../data/test_images.txt";
  // Model V2 specific common global vars
  #ifdef MODEL_V2
  const float kPROB_THRESH = 0.5f;
  const float kNMS_THRESH = 0.5f;
  const std::string kYOLO_CONFIG_PATH = "../../../data/yolov2.cfg";
  const std::string kTRAINED_WEIGHTS_PATH = "../../../data/yolov2.weights";
  const std::string kNETWORK_TYPE = "yolov2";
  const std::string kCALIB_TABLE_PATH = kDS_LIB_PATH + "calibration/yolov2-calibration.table";
Also check /path to data/data/test_images.txt: the paths inside must point to the test images, and the images must actually exist at those paths.

11Q: yolo plugin make error: *** "CUDA_VER variable is not set in Makefile.config". Stop.

11A: Set all the variables in Makefile.config under the plugin directory; a clip for reference:

      CUDA_VER:= 9.2 
      #Set to TEGRA for jetson or TESLA for dGPU's 
      #For Tesla Plugins 
      OPENCV_INSTALL_DIR:= /home/tse/cuse/opencv3.4/opencv-3.4.0/install 
      TENSORRT_INSTALL_DIR:= /home/tse/cuse/trt4.0.1.6/usr 
      DEEPSTREAM_INSTALL_DIR:= /home/tse/work/deepstream/DeepStream_Release 
      #CUDNN_DIR := /home/tse/work/cpxavier/cudnn7.3.0.23/cuda

12Q: CUDA dependency: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.0/': cannot open shared object file: No such file or directory

12A: If you hit an error like this, install the CUDA version that provides the missing library.

13Q: deepstream-app fails with: dlopen "" failed! Segmentation fault (core dumped)

13A: Link the missing library into the loader search path, e.g.: sudo ln -s /var/lib/dkms/nvidia/396.26/build/ /usr/lib/x86_64-linux-gnu/
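Before symlinking by hand, it can help to check whether the dynamic loader already knows the library. A sketch, with "libc" used purely as a stand-in for the library named in your dlopen error:

```shell
# Look up a library in the dynamic linker cache; replace "libc" with
# the library from the dlopen error message.
LIBNAME=libc
(ldconfig -p 2>/dev/null || /sbin/ldconfig -p) | grep "$LIBNAME" | head -n 3
```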

14Q: deepstream-app fails with "Failed to create 'sink_sub_bin_sink1'", or a sample such as deepstream-test1-app fails with "decoder --- FAIL One element could not be created. Exiting."

14A: Either follow the README and install the NVIDIA GStreamer plugins into /usr/lib/x86_64-linux-gnu/gstreamer-1.0, or export GST_PLUGIN_PATH to /path to package dir/DeepStream_Release/usr/lib/x86_64-linux-gnu/gstreamer-1.0 and export LD_LIBRARY_PATH to include /path to package dir/DeepStream_Release/usr/local/deepstream/ as well as the TensorRT, cuDNN, and OpenCV library directories, making sure the versions are correct.
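The exports can be sketched as below, assuming the package was unpacked under /opt (DS_ROOT is a placeholder; adjust it, and append the TensorRT/cuDNN/OpenCV directories the same way):

```shell
# Hypothetical unpack location -- point this at your DeepStream_Release dir.
DS_ROOT=/opt/DeepStream_Release
# Let GStreamer find the NVIDIA plugins shipped in the package.
export GST_PLUGIN_PATH=$DS_ROOT/usr/lib/x86_64-linux-gnu/gstreamer-1.0:$GST_PLUGIN_PATH
# Let the loader find the DeepStream libraries (add TensorRT/cuDNN/OpenCV too).
export LD_LIBRARY_PATH=$DS_ROOT/usr/local/deepstream:$LD_LIBRARY_PATH
echo "$GST_PLUGIN_PATH"
```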

15Q: How to use multiple IP cameras with deepstream?


1. First get the software from the camera vendor, install it, and configure the cameras.

2. Change the deepstream config, choosing the source type that matches how the cameras are reached:

         #Type - 1=CameraV4L2 2=URI 3=MultiURI

3. Run: /path to DeepStream_Release/usr/bin/deepstream-app -c "your configuration file path"
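As a sketch, each camera gets its own [source] group with type=2 (URI); the addresses, credentials, and stream paths below are placeholders for whatever your cameras expose:

         [source0]
         enable=1
         #Type - 1=CameraV4L2 2=URI 3=MultiURI
         type=2
         uri=rtsp://user:password@192.168.1.10:554/stream1

         [source1]
         enable=1
         type=2
         uri=rtsp://user:password@192.168.1.11:554/stream1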

16Q: Running deepstream-app gives the error "Unable to set device in gst_nvstreammux_change_state"

16A: Check that nvidia-smi works and that the deviceQuery sample from the CUDA samples runs successfully; failures there point to a driver or CUDA installation problem rather than DeepStream.