TensorRT/PerfIssues

Quick Check

In general, you should follow the rules below to get good performance.

CUDA Perspective

  • Use only one CUDA context per GPU
  1. Multiple CUDA contexts consume extra memory
  2. Different CUDA contexts sharing the same GPU are time-sliced
  • Do not use the default CUDA stream
  1. Any CUDA command issued to the NULL stream causes an implicit synchronization
  • Run parallel CUDA tasks, e.g. different TensorRT inference instances, on different CUDA streams
  • Do not call cudaMalloc() in the main loop of the application (see the sketch after this list)
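
A minimal sketch of these rules; doInference(), hInput/hOutput, inputSize/outputSize and running are placeholders for the application's own code:

cudaStream_t stream;
cudaStreamCreate(&stream);                       // dedicated non-default stream

void* dInput = nullptr;
void* dOutput = nullptr;
cudaMalloc(&dInput, inputSize);                  // allocate once, before the main loop
cudaMalloc(&dOutput, outputSize);

while (running)
{
    cudaMemcpyAsync(dInput, hInput, inputSize, cudaMemcpyHostToDevice, stream);
    doInference(stream, dInput, dOutput);        // e.g. context->enqueue(..., stream, nullptr)
    cudaMemcpyAsync(hOutput, dOutput, outputSize, cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);               // no work is ever issued to the NULL stream
}

cudaFree(dInput);
cudaFree(dOutput);
cudaStreamDestroy(stream);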

TensorRT Perspective

  1. Maximize the parallelism capability on the GPU
  2. Save runtime memory consumption
  • Lower Precision Mode
  1. Lower precision has higher compute throughput
  2. Lower precision consumes less memory
  • Ensure there is enough workspace size for TensorRT inference (a build sketch follows this list)
  1. builder->setMaxWorkspaceSize(1_GB); // TensorRT 5.1
  2. config->setMaxWorkspaceSize(1_GiB); // TensorRT 6.0
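
A minimal build sketch covering workspace size and lower precision, using the TensorRT 6.x IBuilderConfig API; gLogger and network are assumed to come from the surrounding sample code:

nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
config->setMaxWorkspaceSize(1ULL << 30);               // 1 GiB of scratch space for layer tactics
config->setFlag(nvinfer1::BuilderFlag::kFP16);         // optional: lower precision if the GPU supports it
nvinfer1::ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);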

Profiler

There are many useful profiling tools that can help TensorRT users find out where the inference time is spent.

trtexec

  • It's in TensorRT package (bin: TensorRT/bin/trtexec, code: TensorRT/samples/trtexec/)
  • Lots of handy and useful options are supported:
  1. Build a model using different build options, with or without weight/input/calibration data, and save the built TensorRT engine
  2. Run inference using different inference options, with or without input, or simply run inference with a saved TensorRT engine
  3. Profile the inference
  • Example of "./trtexec --deploy=ResNet-50-deploy.prototxt --output=prob --int8 --batch=8 --dumpProfile"
Log - File:Trtexec log.txt

Measure the Inference Time

  • CPU Timing: https://docs.nvidia.com/deeplearning/sdk/tensorrt-best-practices/index.html#cpu-timing
  • CUDA Events: https://docs.nvidia.com/deeplearning/sdk/tensorrt-best-practices/index.html#cuda-events
Note: the event below can be used in an application to improve the parallelism between inference data preparation and inference.
TensorRT also includes an optional CUDA event in the method IExecutionContext::enqueue that will be signaled once the input buffers are free to be reused. This allows the application to immediately start refilling the input buffer region for the next inference in parallel with finishing the current inference. For example:
cudaEvent_t inputReady;
cudaEventCreate(&inputReady);

context->enqueue(batchSize, &buffers[0], stream, &inputReady);
cudaEventSynchronize(inputReady);

// At this point we can refill the input buffers, but output buffers may not be done
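
As a complement to the input-ready event above, here is a minimal sketch of timing the inference itself with CUDA events; stream, context, buffers and batchSize are assumed to come from the surrounding application:

cudaEvent_t start, end;
cudaEventCreate(&start);
cudaEventCreate(&end);

cudaEventRecord(start, stream);                            // mark the start on the inference stream
context->enqueue(batchSize, &buffers[0], stream, nullptr);
cudaEventRecord(end, stream);                              // mark the end after the enqueue
cudaEventSynchronize(end);                                 // wait until inference has really finished

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, end);                     // GPU time between the two events
std::cout << "Inference took " << ms << " ms" << std::endl;

cudaEventDestroy(start);
cudaEventDestroy(end);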

IProfiler

IProfiler is TensorRT's built-in profiling interface. A ready-made implementation (SimpleProfiler) is provided in the common sample code (common.h); below is a sample change that applies it to sampleSSD.

--- sampleSSD.cpp.orig	2019-05-27 12:39:14.193521455 +0800
+++ sampleSSD.cpp	2019-05-27 12:38:59.393358775 +0800
@@ -428,8 +428,11 @@
     float* detectionOut = new float[N * kKEEP_TOPK * 7];
     int* keepCount = new int[N];
 
+    SimpleProfiler profiler (" layer time");
+    context->setProfiler(&profiler);
     // Run inference
     doInference(*context, data, detectionOut, keepCount, N);
+    std::cout << profiler;
 
     bool pass = true;

With IProfiler set, the profiler reports the timing of each layer in the network after inference finishes.
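
If SimpleProfiler from common.h is not available, a minimal IProfiler implementation can look like the sketch below; the interface itself comes from NvInfer.h, and MinimalProfiler is just an illustrative name:

struct MinimalProfiler : public nvinfer1::IProfiler
{
    // TensorRT calls this once per layer per inference with the layer's execution time.
    void reportLayerTime(const char* layerName, float ms) override
    {
        std::cout << layerName << ": " << ms << " ms" << std::endl;
    }
};

// Usage: MinimalProfiler profiler; context->setProfiler(&profiler);
// Profiling adds overhead, so use it for analysis rather than for measuring end-to-end performance.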



We strongly recommend running your network in multi-batch mode so that the GPU's compute resources are fully utilized. Multi-batch inference almost always yields better throughput, unless your network is already deep or complicated enough to saturate the GPU on its own.

  • Lower precision mode
    TensorRT supports inference in FP16 or INT8 mode. Generally, inference gets faster going from FP32 to FP16 to INT8.
    For FP16, it is very simple to enable:
builder->setFp16Mode(true);
For INT8, if you don't care about correctness or accuracy during network evaluation, you can simply use dummy dynamic ranges to get the network running in INT8:
samplesCommon::setAllTensorScales(network.get())
builder->setInt8Mode(true);
NOTE: if you finally choose INT8 as the deployment mode, you have to implement an IInt8Calibrator or set proper dynamic ranges for your network.
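
A minimal sketch of setting dummy dynamic ranges manually (essentially what samplesCommon::setAllTensorScales does); network is the parsed INetworkDefinition, setDummyDynamicRanges is an illustrative helper name, and 127.0f is just an arbitrary range for a performance-only experiment:

void setDummyDynamicRanges(nvinfer1::INetworkDefinition* network, float maxAbs = 127.0f)
{
    // Network inputs
    for (int i = 0; i < network->getNbInputs(); ++i)
        network->getInput(i)->setDynamicRange(-maxAbs, maxAbs);

    // Every layer output
    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        for (int j = 0; j < layer->getNbOutputs(); ++j)
            layer->getOutput(j)->setDynamicRange(-maxAbs, maxAbs);
    }
}
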
If you find that FP16 or INT8 does not improve performance significantly, don't panic; break the issue down step by step:
  • Dump the per-layer time and compare it between FP32 and FP16 or INT8.
  • Figure out which layers consume the most time. If it is the FC layers, you probably need to enable hybrid mode (enable both FP16 and INT8):
builder->setFp16Mode(true);
builder->setInt8Mode(true);
builder->setInt8Calibrator(&calibrator);
// or samplesCommon::setAllTensorScales(network.get())
  • If your network has many plugin layers, or those plugin layers are interleaved throughout the network, TensorRT will insert many reformat layers to convert the data layout between normal layers and plugin layers. These reformat layers can be eliminated in some cases, for example in a network with PReLU (which is not supported by TensorRT 5.1 or prior versions): consider replacing PReLU with LeakyReLU, which is a native layer, if this does not decrease accuracy too much.
  • Generally, the speedup from lower precision mainly comes from the convolution layers. If the convolution layers account for only a small part of your network's inference time, lower precision (FP16 or INT8) is not expected to help overall performance much. In this case, consider how to optimize the non-convolution layers, or feed the case back to NVIDIA for advice.
  • Network pruning
    This is beyond what TensorRT itself can help with, but the approach should also be kept in mind for network optimization, e.g. the pruning workflow provided in NVIDIA TLT.
    From the GPU hardware perspective, there are also some tricks when designing or pruning networks; for example, the Tensor Cores on T4 or Xavier are friendlier to convolutions whose channel counts are multiples of 32 or 64. Keep this in mind when you design or prune your feature extraction layers.