== FAQ ==
===== <big>1. How to check TensorRT version?</big> =====
There are two methods to check the TensorRT version:
* Symbols from the library:
 $ nm -D /usr/lib/aarch64-linux-gnu/libnvinfer.so | grep "tensorrt"
 0000000007849eb0 B tensorrt_build_svc_tensorrt_20181028_25152976
 0000000007849eb4 B tensorrt_version_5_0_3_2
NOTE: 20181028 is the build date, 25152976 is the top changelist, and 5_0_3_2 is the version information.<br>
* Macros from the header file:
 $ cat /usr/include/aarch64-linux-gnu/NvInfer.h | grep "define NV_TENSORRT"
 #define NV_TENSORRT_MAJOR 5 //!< TensorRT major version.
 #define NV_TENSORRT_MINOR 0 //!< TensorRT minor version.
 #define NV_TENSORRT_PATCH 3 //!< TensorRT patch version.
 #define NV_TENSORRT_BUILD 2 //!< TensorRT build number.
 #define NV_TENSORRT_SONAME_MAJOR 5 //!< Shared object library major version number.
 #define NV_TENSORRT_SONAME_MINOR 0 //!< Shared object library minor version number.
 #define NV_TENSORRT_SONAME_PATCH 3 //!< Shared object library patch version number.
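The macro output above can also be read programmatically. A minimal sketch (the `parse_trt_version` helper and the inline header text are illustrative, not part of any TensorRT API):

```python
import re

# Sample of the NvInfer.h macro lines shown above.
header_text = """
#define NV_TENSORRT_MAJOR 5 //!< TensorRT major version.
#define NV_TENSORRT_MINOR 0 //!< TensorRT minor version.
#define NV_TENSORRT_PATCH 3 //!< TensorRT patch version.
#define NV_TENSORRT_BUILD 2 //!< TensorRT build number.
"""

def parse_trt_version(text):
    """Extract (major, minor, patch, build) from NvInfer.h macro lines."""
    fields = {name: int(value)
              for name, value in re.findall(r"#define NV_TENSORRT_(\w+)\s+(\d+)", text)}
    return (fields["MAJOR"], fields["MINOR"], fields["PATCH"], fields["BUILD"])

print(parse_trt_version(header_text))  # -> (5, 0, 3, 2)
```

In practice you would read the header with `open("/usr/include/aarch64-linux-gnu/NvInfer.h").read()` instead of the inline string.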
===== <big>2. Is TensorRT thread-safe?</big> =====
The TensorRT runtime is thread-safe in the sense that parallel threads using different TRT execution contexts can execute in parallel without interference.
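The usual pattern is one shared engine and one execution context per thread. The sketch below models that pattern only; `Engine` and `ExecutionContext` here are stand-ins, not the real TensorRT classes (in the real Python API you would call `engine.create_execution_context()` on a deserialized engine and `context.execute_v2(bindings)` inside each thread):

```python
import threading

class Engine:
    """Stand-in for a deserialized TRT engine (shared, read-only)."""
    def create_execution_context(self):
        # Each context owns its own mutable state (bindings, scratch memory).
        return ExecutionContext()

class ExecutionContext:
    def execute(self, batch):
        # Placeholder for context.execute_v2(bindings) in the real API.
        return [x * 2 for x in batch]

engine = Engine()  # built or deserialized once, shared by all threads
results = {}

def worker(tid, batch):
    # One execution context per thread: never share a context across threads.
    context = engine.create_execution_context()
    results[tid] = context.execute(batch)

threads = [threading.Thread(target=worker, args=(i, [i, i + 1])) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The key point is that the engine is immutable after build, while each context carries the per-inference mutable state, which is why contexts must not be shared between threads.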
===== <big>3. Is the INT8 calibration table compatible across different TRT versions or HW platforms?</big> =====
The INT8 calibration table is absolutely NOT compatible between different TRT versions, because the optimized network graph is likely different across versions. If you force TRT to use a mismatched table, it may not find the corresponding scaling factor for a given tensor.<br>
As long as the installed TensorRT version is identical across HW platforms, the INT8 calibration table is compatible. That means you can perform INT8 calibration on a faster computation platform, like V100 or P4, and then deploy the calibration table to Tegra for INT8 inferencing.
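The calibration cache itself is a small text blob whose first line records the TRT version that produced it, followed by one per-tensor scaling factor. The sketch below decodes such a cache; the exact layout (version header, then `tensor_name: <hex of big-endian float32 scale>`) is an assumption based on caches produced by TRT 5.x, so treat the cache as an opaque blob in production code:

```python
import struct

# Hypothetical cache contents; tensor names and hex values are made up.
cache_text = """TRT-5103-EntropyCalibration
data: 3c010a14
conv1: 3c88afc5
"""

def parse_calibration_scales(text):
    """Map tensor name -> float scale, decoding each hex string as a
    big-endian IEEE-754 float32 (assumed cache layout, see lead-in)."""
    scales = {}
    for line in text.strip().splitlines()[1:]:  # skip the version header line
        name, hexval = line.split(":")
        scales[name.strip()] = struct.unpack(">f", bytes.fromhex(hexval.strip()))[0]
    return scales
```

Note that the version string in the header is one reason the cache cannot simply be reused across TRT versions, which matches the compatibility rule stated above.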
===== <big>4. How to check GPU utilization?</big> =====
On Tegra platforms, we can use tegrastats:
 $ sudo /home/nvidia/tegrastats
On desktop platforms, like Tesla, we can use nvidia-smi:
 $ nvidia-smi --format=csv -lms 500 --query-gpu=index,timestamp,utilization.gpu,clocks.current.graphics,clocks.current.sm,clocks.current.video,clocks.current.memory,utilization.memory,memory.total,memory.free,memory.used,power.limit,power.draw,temperature.gpu,fan.speed,compute_mode,gpu_operation_mode.current,clocks_throttle_reasons.active,pstate,clocks_throttle_reasons.hw_slowdown,clocks_throttle_reasons.gpu_idle,clocks_throttle_reasons.applications_clocks_setting,clocks_throttle_reasons.sw_power_cap,clocks_throttle_reasons.sync_boost -i 0 | tee log.csv
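The CSV log captured by the nvidia-smi command can then be post-processed. A minimal sketch, assuming the `--format=csv` header style (column names with units, e.g. `utilization.gpu [%]`); the sample values are invented for illustration:

```python
import csv
import io

# Trimmed sample of an nvidia-smi CSV log (three of the queried columns).
log_csv = """index, utilization.gpu [%], memory.used [MiB]
0, 35 %, 1520 MiB
0, 87 %, 2210 MiB
0, 62 %, 2190 MiB
"""

def average_gpu_utilization(text):
    """Average the utilization.gpu column of an nvidia-smi CSV log."""
    reader = csv.DictReader(io.StringIO(text), skipinitialspace=True)
    samples = [int(row["utilization.gpu [%]"].rstrip(" %")) for row in reader]
    return sum(samples) / len(samples)
```

For a real log, replace the inline string with the contents of the `log.csv` file written by `tee`.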
===== <big>5. What is kernel auto-tuning?</big> =====
TensorRT contains various kernel implementations, including those from cuDNN and cuBLAS, to accommodate diverse neural network configurations (batch, input/output dims, filters, strides, pads, dilation rate, etc.). During network building, TensorRT profiles all suitable kernels, finds the one with the smallest latency, and marks it as the final tactic to run that layer. This process is called kernel auto-tuning.
Additionally, an INT8 kernel is not always faster than its FP16 counterpart, nor FP16 always faster than FP32, so:
* if you run FP16 precision mode, it profiles all candidates in the FP16 kernel pool and the FP32 kernel pool.
* if you run INT8 precision mode, it profiles all candidates in the INT8 kernel pool and the FP32 kernel pool.
* if both FP16 and INT8 are enabled (we call it hybrid mode), it profiles all candidates in the INT8, FP16 and FP32 kernel pools.
If the current layer chooses a different mode than its bottom or top layer, TensorRT inserts a reformatting layer between them to perform the tensor format conversion, and the time for this reformatting layer is counted as part of the current layer's cost during auto-tuning.
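The selection step described above can be modeled as picking the minimum total latency over the candidate pools, charging a reformat penalty when a candidate's precision differs from the input tensor's. Everything in this sketch (the latency numbers, `pick_tactic`, `REFORMAT_COST`) is a toy illustration; the real profiling happens inside the TRT builder:

```python
# Hypothetical profiled latencies (ms) per (layer, precision mode).
profiled = {
    ("conv1", "int8"): 0.10,
    ("conv1", "fp16"): 0.08,  # INT8 is not always the fastest choice
    ("conv1", "fp32"): 0.15,
}
REFORMAT_COST = 0.03  # extra ms when a layer's mode differs from its input's

def pick_tactic(layer, candidates, input_mode):
    """Choose the mode with the lowest total latency, adding a reformat
    penalty to any candidate whose precision differs from the input tensor's."""
    def total_cost(mode):
        cost = candidates[(layer, mode)]
        if mode != input_mode:
            cost += REFORMAT_COST
        return cost
    return min((m for (l, m) in candidates if l == layer), key=total_cost)
```

With an FP16 input, FP16 wins outright; with an INT8 input, the reformat penalty flips the choice back to INT8 even though the bare FP16 kernel is faster, which is exactly why the reformat time is charged during auto-tuning.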
NVIDIA TensorRT™ is a platform for high-performance deep learning inference. It includes a deep learning inference optimizer and runtime that delivers low latency and high-throughput for deep learning inference applications. TensorRT-based applications perform up to 40x faster than CPU-only platforms during inference. With TensorRT, you can optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and finally deploy to hyperscale data centers, embedded, or automotive product platforms.
* TensorRT Developer Guide
* TensorRT Developer Guide#FAQs
You can find answers here for some common questions about using TRT.
Refer to the page TensorRT/CommonFAQ
=== TRT Accuracy FAQ ===
If your FP16 or INT8 results are not as expected, the page below may help you fix the accuracy issues.
Refer to the page TensorRT/AccuracyIssues
=== TRT Performance FAQ ===
If the performance of doing inference with TRT is not as expected, the page below may help you optimize it.
Refer to the page TensorRT/PerfIssues
=== TRT Int8 Calibration FAQ ===
The page below presents some FAQs about TRT INT8 calibration.
Refer to the page TensorRT/Int8CFAQ
=== TRT Plugin FAQ ===
The page below presents some FAQs about TRT plugins.
Refer to the page TensorRT/PluginFAQ
=== How to fix some common errors ===
If you hit errors while using TRT, the page below may have the answer.
Refer to the page TensorRT/CommonErrorFix
=== How to debug or analyze ===
The page below will help you debug your inference in several ways.
Refer to the page TensorRT/How2Debug
=== TRT & YoloV3 FAQ ===
Refer to the page TensorRT/YoloV3
Here we list all the known issues that have been clarified in different TensorRT versions.