Difference between revisions of "Jetson Zoo"

From eLinux.org
This page contains instructions for installing various open source add-on packages and frameworks on [[Jetson|NVIDIA Jetson]], in addition to a collection of DNN models for inferencing.
 
  
Below are links to container images and precompiled binaries built for the aarch64 (arm64) architecture.  These are intended to be installed on top of [https://developer.nvidia.com/embedded/jetpack JetPack].
 
Note that JetPack comes with various pre-installed components such as the L4T kernel, CUDA Toolkit, cuDNN, TensorRT, VisionWorks, OpenCV, GStreamer, Docker, and more.  
 
 
For the latest updates and support, refer to the listed forum topics.  Feel free to contribute to the list below if you know of software packages that are working & tested on Jetson.
 
  
 
= Machine Learning =
 
  
 
There are also helpful deep learning examples and tutorials available that were created specifically for Jetson, such as Hello AI World and JetBot.

== Docker Containers ==

[[File:NGC_containers.png|185px|right]]

There are ready-to-use ML and data science containers for Jetson hosted on [https://ngc.nvidia.com/catalog/all?orderBy=modifiedDESC&pageNumber=0&query=l4t&quickFilter=&filters= NVIDIA GPU Cloud] (NGC), including the following:

* [https://ngc.nvidia.com/catalog/containers/nvidia:l4t-tensorflow l4t-tensorflow] - TensorFlow for JetPack 4.4
* [https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch l4t-pytorch] - PyTorch for JetPack 4.4
* [https://ngc.nvidia.com/catalog/containers/nvidia:l4t-ml l4t-ml] - TensorFlow, PyTorch, scikit-learn, scipy, pandas, JupyterLab, etc.

If you wish to modify them, the Dockerfiles and build scripts for these containers can be found on [https://github.com/dusty-nv/jetson-containers GitHub].

The following ready-to-use ML containers for Jetson are also hosted by our partners:

* ONNX Runtime for Jetson: [https://mcr.microsoft.com/azureml/onnxruntime:v.1.4.0-jetpack4.4-l4t-base-r32.4.3 mcr.microsoft.com/azureml/onnxruntime:v.1.4.0-jetpack4.4-l4t-base-r32.4.3]

These containers are highly recommended for reducing the installation time of the frameworks below, and for beginners getting started.

<br/>
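To pull and run one of these images, the container tag should match the L4T release on the device. A minimal sketch, assuming the <code>rNN.N.N-py3</code> tag scheme of the JetPack 4.4 (L4T R32.4.3) era images:

```shell
# Build the image name for the l4t-ml container matching a given L4T release.
# Assumption: NGC tags follow the rNN.N.N-py3 scheme (e.g. r32.4.3 for JetPack 4.4).
L4T_RELEASE="r32.4.3"   # on the device, this can be read from /etc/nv_tegra_release
IMAGE="nvcr.io/nvidia/l4t-ml:${L4T_RELEASE}-py3"
echo "${IMAGE}"

# then, on the Jetson itself:
# sudo docker run -it --rm --runtime nvidia --network host "${IMAGE}"
```

The <code>--runtime nvidia</code> flag enables GPU access inside the container; Docker comes pre-installed with JetPack.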
  
 
== TensorFlow ==
 
 
* Website:  [https://www.tensorflow.org/ https://tensorflow.org]
 
 
* Source:  [https://github.com/tensorflow/tensorflow https://github.com/tensorflow/tensorflow]
 
* Container:  [https://ngc.nvidia.com/catalog/containers/nvidia:l4t-tensorflow l4t-tensorflow]
* Version:  1.15.2, 2.2
* Packages:
** JetPack 4.4 (L4T R32.4.3)
*** [https://developer.download.nvidia.com/compute/redist/jp/v44/tensorflow/tensorflow-1.15.2+nv20.6-cp36-cp36m-linux_aarch64.whl 1.15.2 pip wheel] (Python 3.6)
*** [https://developer.download.nvidia.com/compute/redist/jp/v44/tensorflow/tensorflow-2.2.0+nv20.6-cp36-cp36m-linux_aarch64.whl 2.2 pip wheel] (Python 3.6)
** JetPack 4.4 Developer Preview (L4T R32.4.2)
*** [https://developer.download.nvidia.com/compute/redist/jp/v44/tensorflow/tensorflow-1.15.2+nv20.4-cp36-cp36m-linux_aarch64.whl 1.15.2 pip wheel] (Python 3.6)
*** [https://developer.download.nvidia.com/compute/redist/jp/v44/tensorflow/tensorflow-2.1.0+nv20.4-cp36-cp36m-linux_aarch64.whl 2.1 pip wheel] (Python 3.6)
** JetPack 4.3
*** [https://developer.download.nvidia.com/compute/redist/jp/v43/tensorflow/tensorflow-1.15.2+nv20.3-cp36-cp36m-linux_aarch64.whl 1.15.2 pip wheel] (Python 3.6)
*** [https://developer.download.nvidia.com/compute/redist/jp/v43/tensorflow/tensorflow-2.1.0+nv20.3-cp36-cp36m-linux_aarch64.whl 2.1 pip wheel] (Python 3.6)
* Supports:  JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
* Install Guide:  [https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html#prereqs Installing TensorFlow on Jetson]
 
* Forum Topic:  [https://devtalk.nvidia.com/default/topic/1048776/jetson-nano/official-tensorflow-for-jetson-nano-/ devtalk.nvidia.com/default/topic/1048776/jetson-nano/official-tensorflow-for-jetson-nano-/]
 
 
* Build from Source:  [https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/ https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/]
 
 
<source lang="bash">
# install prerequisites
$ sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran

# install and upgrade pip3
$ sudo apt-get install python3-pip
$ sudo pip3 install -U pip testresources setuptools

# install the following python packages
$ sudo pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

# to install TensorFlow 1.15 for JetPack 4.4:
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'

# or install the latest version of TensorFlow (2.2) for JetPack 4.4:
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow
</source>
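The <code>--extra-index-url</code> used above encodes the JetPack release (<code>jp/v44</code>). As an illustrative helper, assuming the <code>jp/v42</code> / <code>jp/v43</code> / <code>jp/v44</code> URL scheme of the wheels listed in this section, the matching index can be selected like so:

```shell
# Map a JetPack release number to NVIDIA's pip package index for Jetson.
# Assumption: the jp/v42, jp/v43, jp/v44 URL scheme from the wheel links above.
jetpack_index() {
    case "$1" in
        4.2*) echo "https://developer.download.nvidia.com/compute/redist/jp/v42" ;;
        4.3*) echo "https://developer.download.nvidia.com/compute/redist/jp/v43" ;;
        4.4*) echo "https://developer.download.nvidia.com/compute/redist/jp/v44" ;;
        *)    echo "unsupported JetPack release: $1" >&2; return 1 ;;
    esac
}

jetpack_index "4.4"
```

For example: <code>sudo pip3 install --pre --extra-index-url $(jetpack_index 4.4) tensorflow</code>. Note that on JetPack 4.2 the package was named <code>tensorflow-gpu</code> rather than <code>tensorflow</code>.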
  
 
* Website:  [https://pytorch.org/ https://pytorch.org/]
 
 
* Source:  [https://github.com/pytorch/pytorch https://github.com/pytorch/pytorch]
 
* Container:  [https://ngc.nvidia.com/catalog/containers/nvidia:l4t-pytorch l4t-pytorch]
* Version:  PyTorch v1.0.0 - v1.6.0
 
* Packages:
 
{{spaces|6}} '''JetPack 4.4 (L4T R32.4.3)'''

{{spaces|6}} * [https://nvidia.box.com/shared/static/9eptse6jyly1ggt9axbja2yrmj6pbarc.whl PyTorch v1.6.0 pip wheel] (Python 3.6)

{{spaces|6}} '''JetPack 4.4 Developer Preview (L4T R32.4.2)'''

{| class="wikitable" style="margin-left:25px"
|-
! !! Python 2.7 !! Python 3.6
|-
| v1.2.0 || [https://nvidia.box.com/shared/static/8673jprub45abv3tmx4sscdllpbinoh9.whl pip wheel] || [https://nvidia.box.com/shared/static/lufbgr3xu2uha40cs9ryq1zn4kxsnogl.whl pip wheel]
|-
| v1.3.0 || [https://nvidia.box.com/shared/static/fctwivi70bsxxvd03kcgpssgbmvlpor5.whl pip wheel] || [https://nvidia.box.com/shared/static/017sci9z4a0xhtwrb4ps52frdfti9iw0.whl pip wheel]
|-
| v1.4.0 || [https://nvidia.box.com/shared/static/yhlmaie35hu8jv2xzvtxsh0rrpcu97yj.whl pip wheel] || [https://nvidia.box.com/shared/static/c3d7vm4gcs9m728j6o5vjay2jdedqb55.whl pip wheel]
|-
| v1.5.0 || || [https://nvidia.box.com/shared/static/3ibazbiwtkl181n95n9em3wtrca7tdzp.whl pip wheel]
|}

{{spaces|6}} '''JetPack 4.2 / 4.3'''
 
{| class="wikitable" style="margin-left:25px"
|-
! !! Python 2.7 !! Python 3.6
|-
| v1.2.0 || [https://nvidia.box.com/v/torch-1-2-cp27-jetson-jp421 pip wheel] || [https://nvidia.box.com/v/torch-1-2-cp36-jetson-jp421 pip wheel]
|-
| v1.3.0 || [https://nvidia.box.com/v/torch-1-3-cp27-jetson-jp422 pip wheel] || [https://nvidia.box.com/v/torch-1-3-cp36-jetson-jp422 pip wheel]
|-
| v1.4.0 || [https://nvidia.box.com/v/torch-1-4-cp27-jetson-jp43 pip wheel] || [https://nvidia.box.com/v/torch-1-4-cp36-jetson-jp43 pip wheel]
|}
* As per the [https://github.com/pytorch/pytorch/releases PyTorch Release Notes], Python 2 is no longer supported as of PyTorch v1.5 and newer.
* Supports:  JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
 
* Forum Topic:  [https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/ devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/]  
 
 
* Build from Source:  [https://devtalk.nvidia.com/default/topic/1049071/#5324123 https://devtalk.nvidia.com/default/topic/1049071/#5324123]
 
 
<span style="color: white; background-color: #7BCC70; padding: 5px">'''note''' — the PyTorch and Caffe2 projects have merged, so installing PyTorch will also install Caffe2</span>
 
 
<source lang="bash">
# for PyTorch v1.4.0, install OpenBLAS
$ sudo apt-get install libopenblas-base libopenmpi-dev

# Python 2.7 (download pip wheel from above)
$ pip install future torch-1.4.0-cp27-cp27mu-linux_aarch64.whl

# Python 3.6 (download pip wheel from above)
$ pip3 install Cython
$ pip3 install numpy torch-1.4.0-cp36-cp36m-linux_aarch64.whl
</source>

As per the [https://github.com/pytorch/pytorch/releases/tag/v1.4.0 PyTorch 1.4 Release Notes], Python 2 support is now deprecated and PyTorch 1.4 is the last version to support Python 2.
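Each wheel above targets a single Python ABI, visible in the <code>cp27</code>/<code>cp36</code> tag of its filename. A small illustrative check (the helper name is hypothetical) before running <code>pip install</code>:

```shell
# Report which Python a wheel targets, based on its cpXX ABI tag.
# Assumption: standard wheel filename tags, as in the torch wheels above.
wheel_python() {
    case "$1" in
        *cp27*) echo "Python 2.7" ;;
        *cp36*) echo "Python 3.6" ;;
        *)      echo "unknown" ;;
    esac
}

wheel_python "torch-1.4.0-cp36-cp36m-linux_aarch64.whl"
```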

== ONNX Runtime ==

[[File:ONNX-Runtime-logo.svg|200px|right]]

* Website: https://microsoft.github.io/onnxruntime/
* Source: https://github.com/microsoft/onnxruntime
* Container: [https://mcr.microsoft.com/azureml/onnxruntime:v.1.4.0-jetpack4.4-l4t-base-r32.4.3 mcr.microsoft.com/azureml/onnxruntime:v.1.4.0-jetpack4.4-l4t-base-r32.4.3]
* Version: 1.4.0
* Packages:
** JetPack 4.4 (L4T R32.4.3) - [https://nvidia.box.com/shared/static/8sc6j25orjcpl6vhq3a4ir8v219fglng.whl 1.4.0 pip wheel (Python 3.6)]
* Supports: JetPack >= 4.4 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
* Forum Support: https://github.com/microsoft/onnxruntime
* Build from Source: Refer to [https://github.com/microsoft/onnxruntime/blob/master/BUILD.md#nvidia-jetson-tx1tx2nanoxavier these instructions]
* ONNX Runtime 1.4.0 install instructions:

<source lang="bash">
# download the pip wheel from the location mentioned above
$ wget https://nvidia.box.com/shared/static/8sc6j25orjcpl6vhq3a4ir8v219fglng.whl -O onnxruntime_gpu-1.4.0-cp36-cp36m-linux_aarch64.whl

# install the pip wheel
$ pip3 install onnxruntime_gpu-1.4.0-cp36-cp36m-linux_aarch64.whl
</source>
  
 
* Website:  [https://mxnet.apache.org/ https://mxnet.apache.org/]
 
 
* Source:  [https://github.com/apache/incubator-mxnet https://github.com/apache/incubator-mxnet]
 
* Version:  1.4, 1.6, 1.7
* Packages:
{| class="wikitable" style="margin-left:25px"
|-
! !! Python 2.7 !! Python 3.6
|-
| v1.4 (JetPack 4.2.x) || [https://drive.google.com/open?id=1ot6XtrV9r70wUzM13bpTzrGiJWlfUddV pip wheel] || [https://drive.google.com/open?id=1jr-kP1_zlLa9tx-GtdlBV3Nn20qRJgzY pip wheel]
|-
| v1.6 (JetPack 4.3) || [https://drive.google.com/open?id=1i-wgDa8rVv-9l-iR8iEhWNSLt7A9bRwZ pip wheel] || [https://drive.google.com/open?id=1acFgoFaw9arP1I6VZFR3Jjsm6TNkpR0v pip wheel]
|-
| v1.7 (JetPack 4.4) || - || [https://forums.developer.nvidia.com/t/i-was-unable-to-compile-and-install-mxnet1-5-with-tensorrt-on-the-jetson-nano-is-there-someone-have-compile-it-please-help-me-thank-you/111303/27 forum post]
|}
* Supports:  JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
* Forum Topics: [https://devtalk.nvidia.com/default/topic/1049293/#5326170 v1.4] | [https://forums.developer.nvidia.com/t/i-was-unable-to-compile-and-install-mxnet1-5-with-tensorrt-on-the-jetson-nano-is-there-someone-have-compile-it-please-help-me-thank-you/111303/3 v1.6] | [https://forums.developer.nvidia.com/t/i-was-unable-to-compile-and-install-mxnet1-5-with-tensorrt-on-the-jetson-nano-is-there-someone-have-compile-it-please-help-me-thank-you/111303/27 v1.7]
* Build from Source: [https://forums.developer.nvidia.com/t/i-was-unable-to-compile-and-install-mxnet-on-the-jetson-nano-is-there-an-official-installation-tutorial/72259/8 v1.6] | [https://forums.developer.nvidia.com/t/i-was-unable-to-compile-and-install-mxnet1-5-with-tensorrt-on-the-jetson-nano-is-there-someone-have-compile-it-please-help-me-thank-you/111303/27 v1.7]

'''MXNet 1.7 Install Instructions:'''
  
<source lang="bash">
$ wget https://raw.githubusercontent.com/AastaNV/JEP/master/MXNET/autoinstall_mxnet.sh
$ sudo chmod +x autoinstall_mxnet.sh
$ ./autoinstall_mxnet.sh <Nano/TX1/TX2/Xavier>
</source>
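The script's final argument selects the target board. A hypothetical guard that validates it against the tokens the usage line above expects:

```shell
# Check that the board token is one the autoinstall script understands.
# Accepted values come from the usage line above (<Nano/TX1/TX2/Xavier>).
valid_board() {
    case "$1" in
        Nano|TX1|TX2|Xavier) return 0 ;;
        *) echo "expected one of: Nano TX1 TX2 Xavier" >&2; return 1 ;;
    esac
}

valid_board "Xavier" && echo "ok"
```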

'''MXNet 1.4 / 1.6 Install Instructions:'''
 
<source lang="bash">
 
 
# Python 2.7
 
 
* Website:  [https://developer.nvidia.com/embedded/twodaystoademo https://developer.nvidia.com/embedded/twodaystoademo]
 
 
* Source:  [https://github.com/dusty-nv/jetson-inference https://github.com/dusty-nv/jetson-inference]
 
* Supports:  Jetson Nano, TX1, TX2, Xavier NX, AGX Xavier
 
* Build from Source:
 
  
 
<source lang="bash">
 
 
# download the repo
 
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
  
 
# configure build tree
 
  
 
# build and install
 
$ make -j$(nproc)
 
$ sudo make install
$ sudo ldconfig
 
</source>
 
 
{| class="wikitable" style="text-align: center; padding=0;"
! scope="col" style="width: 150px;" | Network
! scope="col" style="width: 100px;" | Dataset
! scope="col" style="width: 65px;" | Resolution
! scope="col" style="width: 65px;" | Classes
! scope="col" style="width: 65px;" | Framework
! scope="col" style="width: 65px;" | Format
! scope="col" style="width: 65px;" | TensorRT
! scope="col" style="width: 65px;" | Samples
! scope="col" style="width: 65px;" | Original
|-
| AlexNet || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet BVLC]
|-
| GoogleNet || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet BVLC]
|-
| ResNet-18 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/HolmesShuan/ResNet-18-Caffemodel-on-ImageNet GitHub]
|-
| ResNet-50 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/KaimingHe/deep-residual-networks GitHub]
|-
| ResNet-101 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/KaimingHe/deep-residual-networks GitHub]
|-
| ResNet-152 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/KaimingHe/deep-residual-networks GitHub]
|-
| VGG-16 || [http://image-net.org/challenges/LSVRC/2014/ ILSVRC14] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://gist.github.com/ksimonyan/211839e770f7b538e2d8 GitHub]
|-
| VGG-19 || [http://image-net.org/challenges/LSVRC/2014/ ILSVRC14] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://gist.github.com/ksimonyan/3785162f95cd2d5fee77 GitHub]
|-
| Inception-v4 || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 299x299 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/SnailTyan/caffe-model-zoo/tree/master/Inception-v4 GitHub]
|-
| Inception-v4 || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 299x299 || 1000 || TensorFlow || <code>TF-TRT (UFF)</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/tf_trt_models tf_trt_models] || [https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models TF-slim]
|-
| Mobilenet-v1 || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 224x224 || 1000 || TensorFlow || <code>TF-TRT (UFF)</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/tf_trt_models tf_trt_models] || [https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models TF-slim]
|}
  
 
{| class="wikitable" style="text-align: center; padding=0;"
! scope="col" style="width: 150px;" | Network
! scope="col" style="width: 100px;" | Dataset
! scope="col" style="width: 65px;" | Resolution
! scope="col" style="width: 65px;" | Classes
! scope="col" style="width: 65px;" | Framework
! scope="col" style="width: 65px;" | Format
! scope="col" style="width: 65px;" | TensorRT
! scope="col" style="width: 65px;" | Samples
! scope="col" style="width: 65px;" | Original
|-
| SSD-Mobilenet-v1 || [http://cocodataset.org COCO] || 300x300 || 91 || TensorFlow || <code>UFF</code> || {{Yes}} || rowspan="3" | [[Jetson_Zoo#Hello_AI_World|Hello AI World]] <br /> <small>[https://github.com/AastaNV/TRT_object_detection TRT_object_detection]</small> || [https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models TF Zoo]
|-
| SSD-Mobilenet-v2 || [http://cocodataset.org COCO] || 300x300 || 91 || TensorFlow || <code>UFF</code> || {{Yes}} || [https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models TF Zoo]
|-
| SSD-Inception-v2 || [http://cocodataset.org COCO] || 300x300 || 91 || TensorFlow || <code>UFF</code> || {{Yes}} || [https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models TF Zoo]
|-
| YOLO-v2 || [http://cocodataset.org COCO] || 608x608 || 80 || Darknet || <code>Custom</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo#trt-yolo-app trt-yolo-app] || [https://pjreddie.com/darknet/yolo/ YOLO]
|-
| YOLO-v3 || [http://cocodataset.org COCO] || 608x608 || 80 || Darknet || <code>Custom</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo#trt-yolo-app trt-yolo-app] || [https://pjreddie.com/darknet/yolo/ YOLO]
|-
| Tiny YOLO-v3 || [http://cocodataset.org COCO] || 416x416 || 80 || Darknet || <code>Custom</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo#trt-yolo-app trt-yolo-app] || [https://pjreddie.com/darknet/yolo/ YOLO]
|-
| Faster-RCNN || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 500x375 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#fasterrcnn_sample TensorRT sample] || [https://github.com/rbgirshick/py-faster-rcnn GitHub]
|}
  
 
{| class="wikitable" style="text-align: center; padding=0;"
! scope="col" style="width: 150px;" | Network
! scope="col" style="width: 100px;" | Dataset
! scope="col" style="width: 65px;" | Resolution
! scope="col" style="width: 65px;" | Classes
! scope="col" style="width: 65px;" | Framework
! scope="col" style="width: 65px;" | Format
! scope="col" style="width: 65px;" | TensorRT
! scope="col" style="width: 65px;" | Samples
! scope="col" style="width: 65px;" | Original
|-
| FCN-ResNet18 || [https://www.cityscapes-dataset.com/ Cityscapes] || 2048x1024 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://www.cityscapes-dataset.com/ Cityscapes] || 1024x512 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://www.cityscapes-dataset.com/ Cityscapes] || 512x256 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://deepscene.cs.uni-freiburg.de/ DeepScene] || 864x480 || 5 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://deepscene.cs.uni-freiburg.de/ DeepScene] || 576x320 || 5 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://lv-mhp.github.io/ Multi-Human] || 640x360 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://lv-mhp.github.io/ Multi-Human] || 512x320 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 512x320 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 320x320 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://rgbd.cs.princeton.edu/ SUN RGB-D] || 640x512 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://rgbd.cs.princeton.edu/ SUN RGB-D] || 512x400 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-Alexnet || [https://www.cityscapes-dataset.com/ Cityscapes] || 2048x1024 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-pretrained.md#generating-pretrained-fcn-alexnet GitHub]
|-
| FCN-Alexnet || [https://www.cityscapes-dataset.com/ Cityscapes] || 1024x512 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-pretrained.md#generating-pretrained-fcn-alexnet GitHub]
 +
|-
 +
| FCN-Alexnet || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 500x356 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-pretrained.md#generating-pretrained-fcn-alexnet GitHub]
 
|-
 
|-
 
| U-Net || Carvana || 512x512 || 1 || TensorFlow || <code>UFF</code> || {{Yes}} || [https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/ Nano Benchmarks] || [https://github.com/lyatdawn/Unet-Tensorflow GitHub]
 
| U-Net || Carvana || 512x512 || 1 || TensorFlow || <code>UFF</code> || {{Yes}} || [https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/ Nano Benchmarks] || [https://github.com/lyatdawn/Unet-Tensorflow GitHub]
 +
|}
 +
 +
=== Pose Estimation ===
 +
 +
{| class="wikitable" style="text-align: center; padding=0;"
 +
! scope="col" style="width: 150px;" | Network
 +
! scope="col" style="width: 100px;" | Dataset
 +
! scope="col" style="width: 65px;" | Resolution
 +
! scope="col" style="width: 65px;" | Classes
 +
! scope="col" style="width: 65px;" | Framework
 +
! scope="col" style="width: 110px;" | Format
 +
! scope="col" style="width: 65px;" | TensorRT
 +
! scope="col" style="width: 150px;" | Samples
 +
! scope="col" style="width: 65px;" | Original
 +
|-
 +
| ResNet18_att || [http://cocodataset.org COCO] || 224x224 || 16 || PyTorch || torch2trt || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/trt_pose trt_pose] || [https://github.com/NVIDIA-AI-IOT/trt_pose GitHub]
 +
|-
 +
| DenseNet121_att || [http://cocodataset.org COCO] || 224x224 || 16 || PyTorch || torch2trt || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/trt_pose trt_pose] || [https://github.com/NVIDIA-AI-IOT/trt_pose GitHub]
 
|}
 
|}
  
Line 240: Line 381:
 
* Website:  [https://opencv.org/ https://opencv.org/]
 
* Website:  [https://opencv.org/ https://opencv.org/]
 
* Source:  [https://github.com/opencv/opencv https://github.com/opencv/opencv]
 
* Source:  [https://github.com/opencv/opencv https://github.com/opencv/opencv]
* Version:  3.3.1
+
* Version:  3.3.1 (JetPack <= 4.2.x), 4.1 (JetPack 4.3, JetPack 4.4)
* Supports:  Jetson Nano / TX1 / TX2 / Xavier
+
* Supports:  Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier
  
* OpenCV 3.3.1 is included with JetPack, compiled with support for GStreamer.  To build a newer version or to enable CUDA support, see these guides:
+
* OpenCV is included with JetPack, compiled with support for GStreamer.  To build a newer version or to enable CUDA support, see these guides:
 
** [https://github.com/mdegans/nano_build_opencv nano_build_opencv] (GitHub)
 
** [https://github.com/mdegans/nano_build_opencv nano_build_opencv] (GitHub)
 
** [https://jkjung-avt.github.io/opencv-on-nano/ Installing OpenCV 3.4.6]
 
** [https://jkjung-avt.github.io/opencv-on-nano/ Installing OpenCV 3.4.6]
Line 256: Line 397:
 
* Source:  [https://github.com/ros https://github.com/ros]
 
* Source:  [https://github.com/ros https://github.com/ros]
 
* Version:  ROS Melodic
 
* Version:  ROS Melodic
* Supports:  JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
+
* Supports:  JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
 
* Installation:  [http://wiki.ros.org/melodic/Installation/Ubuntu http://wiki.ros.org/melodic/Installation/Ubuntu]
 
* Installation:  [http://wiki.ros.org/melodic/Installation/Ubuntu http://wiki.ros.org/melodic/Installation/Ubuntu]
  
Line 282: Line 423:
  
 
* Website:  [https://developer.nvidia.com/isaac-sdk https://developer.nvidia.com/isaac-sdk]
 
* Website:  [https://developer.nvidia.com/isaac-sdk https://developer.nvidia.com/isaac-sdk]
* Version:  2019.2
+
* Version:  2019.2, 2019.3, 2020.1
* Supports:  JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
+
* Supports:  JetPack 4.2.x, JetPack 4.3 (Jetson Nano / TX2 / Xavier)
 
* Downloads: [https://developer.nvidia.com/isaac/downloads https://developer.nvidia.com/isaac/downloads]
 
* Downloads: [https://developer.nvidia.com/isaac/downloads https://developer.nvidia.com/isaac/downloads]
 
* Documentation:  [https://docs.nvidia.com/isaac https://docs.nvidia.com/isaac]
 
* Documentation:  [https://docs.nvidia.com/isaac https://docs.nvidia.com/isaac]
Line 289: Line 430:
 
=== Isaac SIM ===
 
=== Isaac SIM ===
  
* Downloads: [https://developer.nvidia.com/isaac/downloads https://developer.nvidia.com/isaac/downloads]
+
* Website: [https://developer.nvidia.com/isaac-sdk https://developer.nvidia.com/isaac-sdk]
 
* Documentation:  [http://docs.nvidia.com/isaac/isaac_sim/index.html http://docs.nvidia.com/isaac/isaac_sim/index.html]
 
* Documentation:  [http://docs.nvidia.com/isaac/isaac_sim/index.html http://docs.nvidia.com/isaac/isaac_sim/index.html]
  
Line 300: Line 441:
 
* Source:  [https://github.com/aws/aws-greengrass-core-sdk-c https://github.com/aws/aws-greengrass-core-sdk-c]
 
* Source:  [https://github.com/aws/aws-greengrass-core-sdk-c https://github.com/aws/aws-greengrass-core-sdk-c]
 
* Version:  v1.9.1
 
* Version:  v1.9.1
* Supports:  JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
+
* Supports:  JetPack 4.2.x, JetPack 4.3 (Jetson Nano / TX1 / TX2 / Xavier)
 
* Forum Thread:  [https://devtalk.nvidia.com/default/topic/1052324/jetson-nano/jetson-nano-aws-greengrass-/post/5341970/#5341970 https://devtalk.nvidia.com/default/topic/1052324/#5341970]
 
* Forum Thread:  [https://devtalk.nvidia.com/default/topic/1052324/jetson-nano/jetson-nano-aws-greengrass-/post/5341970/#5341970 https://devtalk.nvidia.com/default/topic/1052324/#5341970]
  
Line 340: Line 481:
  
 
* Website:  [https://developer.nvidia.com/deepstream-sdk https://developer.nvidia.com/deepstream-sdk]
 
* Website:  [https://developer.nvidia.com/deepstream-sdk https://developer.nvidia.com/deepstream-sdk]
* Version:  4.0.1
+
* Version:  5.0 (Developer Preview)
* Supports:  JetPack 4.2.2 (Jetson Nano / TX2 / Xavier)
+
* Supports:  JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
 
* FAQ:  [https://developer.nvidia.com/deepstream-faq https://developer.nvidia.com/deepstream-faq]
 
* FAQ:  [https://developer.nvidia.com/deepstream-faq https://developer.nvidia.com/deepstream-faq]
 
* GitHub Samples:
 
* GitHub Samples:
Line 356: Line 497:
 
* Source:  [https://github.com/docker https://github.com/docker]
 
* Source:  [https://github.com/docker https://github.com/docker]
 
* Version:  18.06
 
* Version:  18.06
* Support:  ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier)
+
* Support:  ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
 
* Installed by default in JetPack-L4T
 
* Installed by default in JetPack-L4T
  
Line 385: Line 526:
 
* Website:  [https://kubernetes.io/ https://kubernetes.io/]
 
* Website:  [https://kubernetes.io/ https://kubernetes.io/]
 
* Source:  [https://github.com/kubernetes/ https://github.com/kubernetes/]
 
* Source:  [https://github.com/kubernetes/ https://github.com/kubernetes/]
* Support:  ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier)
+
* Support:  ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
 
* Distributions:
 
* Distributions:
 
** [https://microk8s.io/docs/ MicroK8s] (v1.14) {{spaces|1}} <code>$ sudo snap install microk8s --classic</code>
 
** [https://microk8s.io/docs/ MicroK8s] (v1.14) {{spaces|1}} <code>$ sudo snap install microk8s --classic</code>
 
** [https://github.com/rancher/k3s k3s] (v0.5.0) {{spaces|9}} <code>$ wget https://github.com/rancher/k3s/releases/download/v0.5.0/k3s-arm64</code>
 
** [https://github.com/rancher/k3s k3s] (v0.5.0) {{spaces|9}} <code>$ wget https://github.com/rancher/k3s/releases/download/v0.5.0/k3s-arm64</code>
  
To configure L4T kernel for K8S:  [https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9 https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9]
+
To configure L4T kernel for K8S:  [https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9 https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9] </br>
 +
See also:  [https://medium.com/jit-team/building-a-gpu-enabled-kubernets-cluster-for-machine-learning-with-nvidia-jetson-nano-7b67de74172a https://medium.com/jit-team/building-a-gpu-enabled-kubernets-cluster-for-machine-learning-with-nvidia-jetson-nano-7b67de74172a]

Latest revision as of 18:23, 12 August 2020

This page contains instructions for installing various open source add-on packages and frameworks on NVIDIA Jetson, in addition to a collection of DNN models for inferencing.

Below are links to container images and precompiled binaries built for aarch64 (arm64) architecture. These are intended to be installed on top of JetPack.

Note that JetPack comes with various pre-installed components such as the L4T kernel, CUDA Toolkit, cuDNN, TensorRT, VisionWorks, OpenCV, GStreamer, Docker, and more.

Machine Learning

Jetson is able to natively run the full versions of popular machine learning frameworks, including TensorFlow, PyTorch, Caffe2, Keras, and MXNet.

There are also helpful deep learning examples and tutorials available, created specifically for Jetson - like Hello AI World and JetBot.

Docker Containers

NGC containers.png

There are ready-to-use ML and data science containers for Jetson hosted on NVIDIA GPU Cloud (NGC), including the following:

  • l4t-tensorflow - TensorFlow for JetPack 4.4
  • l4t-pytorch - PyTorch for JetPack 4.4
  • l4t-ml - TensorFlow, PyTorch, scikit-learn, scipy, pandas, JupyterLab, etc.

If you wish to modify them, the Dockerfiles and build scripts for these containers can be found on GitHub.

There are also ready-to-use ML containers for Jetson hosted by partners:

Using these containers is highly recommended, both to reduce the installation time of the frameworks below and as the easiest way for beginners to get started.
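For example, a container can be pulled and started like so. The tag shown here is only an illustration — check NGC for the tag that matches your JetPack/L4T release, which you can read from the board as shown:

```shell
# check which L4T release you are running, so the container tag matches
$ cat /etc/nv_tegra_release

# pull and start the l4t-ml container (r32.4.3-py3 shown as an example tag)
$ sudo docker pull nvcr.io/nvidia/l4t-ml:r32.4.3-py3
$ sudo docker run -it --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.4.3-py3
```

The --runtime nvidia flag gives the container access to the GPU, and --network host lets you reach services (such as JupyterLab) running inside the container.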


TensorFlow

TensorFlow Logo.png
# install prerequisites
$ sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev liblapack-dev libblas-dev gfortran

# install and upgrade pip3
$ sudo apt-get install python3-pip
$ sudo pip3 install -U pip testresources setuptools

# install the following python packages
$ sudo pip3 install -U numpy==1.16.1 future==0.17.1 mock==3.0.5 h5py==2.9.0 keras_preprocessing==1.0.5 keras_applications==1.0.8 gast==0.2.2 futures protobuf pybind11

# to install TensorFlow 1.15 for JetPack 4.4:
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 'tensorflow<2'

# or install the latest version of TensorFlow (2.2) for JetPack 4.4:
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v44 tensorflow
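Afterwards, you can sanity-check the install by printing the version from Python (a quick check, not part of the official instructions; the version printed depends on which wheel you installed):

```shell
# verify that TensorFlow imports and reports its version
$ python3 -c "import tensorflow as tf; print(tf.__version__)"
```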

PyTorch (Caffe2)

PyTorch Logo.png

JetPack 4.4 (L4T R32.4.3)

  • PyTorch v1.6.0 pip wheel (Python 3.6)

JetPack 4.4 Developer Preview (L4T R32.4.2)

Version | Python 2.7 | Python 3.6
v1.2.0  | pip wheel  | pip wheel
v1.3.0  | pip wheel  | pip wheel
v1.4.0  | pip wheel  | pip wheel
v1.5.0  | -          | pip wheel

JetPack 4.2 / 4.3

Version | Python 2.7 | Python 3.6
v1.0.0  | pip wheel  | pip wheel
v1.1.0  | pip wheel  | pip wheel
v1.2.0  | pip wheel  | pip wheel
v1.3.0  | pip wheel  | pip wheel
v1.4.0  | pip wheel  | pip wheel

Note: the PyTorch and Caffe2 projects have merged, so installing PyTorch also installs Caffe2.

# for PyTorch v1.4.0, install OpenBLAS
$ sudo apt-get install libopenblas-base libopenmpi-dev

# Python 2.7 (download pip wheel from above)
$ pip install future torch-1.4.0-cp27-cp27mu-linux_aarch64.whl

# Python 3.6 (download pip wheel from above)
$ pip3 install Cython
$ pip3 install numpy torch-1.4.0-cp36-cp36m-linux_aarch64.whl

As per the PyTorch 1.4 Release Notes, Python 2 support is now deprecated and PyTorch 1.4 is the last version to support Python 2.
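After installing a wheel, a quick check that PyTorch imports and can see the GPU (a sanity test, not part of the official instructions):

```shell
# verify the PyTorch install and CUDA availability
$ python3 -c "import torch; print(torch.__version__); print('CUDA available:', torch.cuda.is_available())"
```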

ONNX Runtime

ONNX-Runtime-logo.svg
# Download pip wheel from location mentioned above
$ wget https://nvidia.box.com/shared/static/8sc6j25orjcpl6vhq3a4ir8v219fglng.whl -O onnxruntime_gpu-1.4.0-cp36-cp36m-linux_aarch64.whl

# Install pip wheel
$ pip3 install onnxruntime_gpu-1.4.0-cp36-cp36m-linux_aarch64.whl
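To confirm the wheel installed correctly, onnxruntime can report its version and the device it will run on (get_device() returns 'GPU' when the CUDA build is active):

```shell
# verify onnxruntime and check which device it will use
$ python3 -c "import onnxruntime; print(onnxruntime.__version__); print(onnxruntime.get_device())"
```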

MXNet

MXNet Logo.png
Version              | Python 2.7 | Python 3.6
v1.4 (JetPack 4.2.x) | pip wheel  | pip wheel
v1.6 (JetPack 4.3)   | pip wheel  | pip wheel
v1.7 (JetPack 4.4)   | -          | forum post
  • Supports: JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
  • Forum Topics: v1.4 | v1.6 | v1.7
  • Build from Source: v1.6 | v1.7

MXNet 1.7 Install Instructions:

$ wget https://raw.githubusercontent.com/AastaNV/JEP/master/MXNET/autoinstall_mxnet.sh
$ sudo chmod +x autoinstall_mxnet.sh
$ ./autoinstall_mxnet.sh <Nano/TX1/TX2/Xavier>

MXNet 1.4 / 1.6 Install Instructions:

# Python 2.7
sudo apt-get install -y git build-essential libatlas-base-dev libopencv-dev graphviz python-pip
sudo pip install mxnet-1.4.0-cp27-cp27mu-linux_aarch64.whl

# Python 3.6
sudo apt-get install -y git build-essential libatlas-base-dev libopencv-dev graphviz python3-pip
sudo pip install mxnet-1.4.0-cp36-cp36m-linux_aarch64.whl
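A quick sanity check of the install (num_gpus() should report 1 on a Jetson when GPU support is working):

```shell
# verify the MXNet install and GPU visibility
$ python3 -c "import mxnet; print(mxnet.__version__); print('GPUs:', mxnet.context.num_gpus())"
```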

Keras

Keras Logo.png

First, install TensorFlow from above.

# beforehand, install TensorFlow (https://eLinux.org/Jetson_Zoo#TensorFlow)
$ sudo apt-get install -y build-essential libatlas-base-dev gfortran
$ sudo pip install keras
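To confirm Keras found its TensorFlow backend, import it and print the version (the import will fail if TensorFlow is missing):

```shell
# verify Keras and its backend
$ python3 -c "import keras; print(keras.__version__)"
```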

Hello AI World

Hello-AI-World-CV.png
# download the repo
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference

# configure build tree
$ mkdir build
$ cd build
$ cmake ../

# build and install
$ make -j$(nproc)
$ sudo make install
$ sudo ldconfig
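Once the build completes, you can try one of the samples from the build directory. The binary and image names below are illustrative and may differ between releases of the repo — see the Hello AI World tutorial for the exact commands for your version:

```shell
# run the imagenet console sample on a test image
# (the default GoogleNet model is loaded on first run)
$ cd jetson-inference/build/aarch64/bin
$ ./imagenet-console orange_0.jpg output_0.jpg
```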

Model Zoo

Below are various DNN models for inferencing on Jetson with support for TensorRT. Included are links to code samples with the model and the original source.

Note that many other models are able to run natively on Jetson by using the Machine Learning frameworks like those listed above.

For performance benchmarks, see these resources:

Classification

Network      | Dataset  | Resolution | Classes | Framework  | Format       | TensorRT | Samples        | Original
AlexNet      | ILSVRC12 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | BVLC
GoogleNet    | ILSVRC12 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | BVLC
ResNet-18    | ILSVRC15 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | GitHub
ResNet-50    | ILSVRC15 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | GitHub
ResNet-101   | ILSVRC15 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | GitHub
ResNet-152   | ILSVRC15 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | GitHub
VGG-16       | ILSVRC14 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | GitHub
VGG-19       | ILSVRC14 | 224x224    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | GitHub
Inception-v4 | ILSVRC12 | 299x299    | 1000    | Caffe      | caffemodel   | Yes      | Hello AI World | GitHub
Inception-v4 | ILSVRC12 | 299x299    | 1000    | TensorFlow | TF-TRT (UFF) | Yes      | tf_trt_models  | TF-slim
Mobilenet-v1 | ILSVRC12 | 224x224    | 1000    | TensorFlow | TF-TRT (UFF) | Yes      | tf_trt_models  | TF-slim

Object Detection

Network          | Dataset    | Resolution | Classes | Framework  | Format     | TensorRT | Samples                               | Original
SSD-Mobilenet-v1 | COCO       | 300x300    | 91      | TensorFlow | UFF        | Yes      | Hello AI World, TRT_object_detection  | TF Zoo
SSD-Mobilenet-v2 | COCO       | 300x300    | 91      | TensorFlow | UFF        | Yes      |                                       | TF Zoo
SSD-Inception-v2 | COCO       | 300x300    | 91      | TensorFlow | UFF        | Yes      |                                       | TF Zoo
YOLO-v2          | COCO       | 608x608    | 80      | Darknet    | Custom     | Yes      | trt-yolo-app                          | YOLO
YOLO-v3          | COCO       | 608x608    | 80      | Darknet    | Custom     | Yes      | trt-yolo-app                          | YOLO
Tiny YOLO-v3     | COCO       | 416x416    | 80      | Darknet    | Custom     | Yes      | trt-yolo-app                          | YOLO
Faster-RCNN      | Pascal VOC | 500x375    | 21      | Caffe      | caffemodel | Yes      | TensorRT sample                       | GitHub

Segmentation

Network      | Dataset     | Resolution | Classes | Framework  | Format     | TensorRT | Samples         | Original
FCN-ResNet18 | Cityscapes  | 2048x1024  | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | Cityscapes  | 1024x512   | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | Cityscapes  | 512x256    | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | DeepScene   | 864x480    | 5       | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | DeepScene   | 576x320    | 5       | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | Multi-Human | 640x360    | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | Multi-Human | 512x320    | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | Pascal VOC  | 512x320    | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | Pascal VOC  | 320x320    | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | SUN RGB-D   | 640x512    | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-ResNet18 | SUN RGB-D   | 512x400    | 21      | PyTorch    | ONNX       | Yes      | Hello AI World  | GitHub
FCN-Alexnet  | Cityscapes  | 2048x1024  | 21      | Caffe      | caffemodel | Yes      | Hello AI World  | GitHub
FCN-Alexnet  | Cityscapes  | 1024x512   | 21      | Caffe      | caffemodel | Yes      | Hello AI World  | GitHub
FCN-Alexnet  | Pascal VOC  | 500x356    | 21      | Caffe      | caffemodel | Yes      | Hello AI World  | GitHub
U-Net        | Carvana     | 512x512    | 1       | TensorFlow | UFF        | Yes      | Nano Benchmarks | GitHub

Pose Estimation

Network         | Dataset | Resolution | Classes | Framework | Format    | TensorRT | Samples  | Original
ResNet18_att    | COCO    | 224x224    | 16      | PyTorch   | torch2trt | Yes      | trt_pose | GitHub
DenseNet121_att | COCO    | 224x224    | 16      | PyTorch   | torch2trt | Yes      | trt_pose | GitHub

Computer Vision

OpenCV

OpenCV Logo.png

  • Website: https://opencv.org/
  • Source: https://github.com/opencv/opencv
  • Version: 3.3.1 (JetPack <= 4.2.x), 4.1 (JetPack 4.3, JetPack 4.4)
  • Supports: Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier

OpenCV is included with JetPack, compiled with support for GStreamer. To build a newer version or to enable CUDA support, see these guides:

  • nano_build_opencv (GitHub): https://github.com/mdegans/nano_build_opencv
  • Installing OpenCV 3.4.6: https://jkjung-avt.github.io/opencv-on-nano/

Robotics

ROS

Ros logo.png

  • Source: https://github.com/ros
  • Version: ROS Melodic
  • Supports: JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
  • Installation: http://wiki.ros.org/melodic/Installation/Ubuntu
# enable all Ubuntu packages:
$ sudo apt-add-repository universe
$ sudo apt-add-repository multiverse
$ sudo apt-add-repository restricted

# add ROS repository to apt sources
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

# install ROS Base
$ sudo apt-get update
$ sudo apt-get install ros-melodic-ros-base

# add ROS paths to environment
$ echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
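To verify the installation, source the setup script in a new terminal and query the distro name (a quick check; it should report melodic):

```shell
# confirm the ROS environment resolves to Melodic
$ source /opt/ros/melodic/setup.bash
$ rosversion -d
```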

NVIDIA Isaac SDK

Isaac Gems.jpg

  • Website: https://developer.nvidia.com/isaac-sdk
  • Version: 2019.2, 2019.3, 2020.1
  • Supports: JetPack 4.2.x, JetPack 4.3 (Jetson Nano / TX2 / Xavier)
  • Downloads: https://developer.nvidia.com/isaac/downloads
  • Documentation: https://docs.nvidia.com/isaac

Isaac SIM

  • Website: https://developer.nvidia.com/isaac-sdk
  • Documentation: http://docs.nvidia.com/isaac/isaac_sim/index.html

IoT / Edge

AWS Greengrass

Greengrass Logo.png

  • Source: https://github.com/aws/aws-greengrass-core-sdk-c
  • Version: v1.9.1
  • Supports: JetPack 4.2.x, JetPack 4.3 (Jetson Nano / TX1 / TX2 / Xavier)
  • Forum Thread: https://devtalk.nvidia.com/default/topic/1052324/#5341970

1. Create Greengrass user group:

$ sudo adduser --system ggc_user
$ sudo addgroup --system ggc_group

2. Set up your AWS account and Greengrass group by following this page: https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-config.html
    After downloading the unique security resource keys created in this step to your Jetson, proceed to step 3 below.

3. Download the AWS IoT Greengrass Core Software (v1.9.1) for ARMv8 (aarch64):

$ wget https://d1onfpft10uf5o.cloudfront.net/greengrass-core/downloads/1.9.1/greengrass-linux-aarch64-1.9.1.tar.gz

4. Following step #4 from this page, extract Greengrass core and your unique security keys on your Jetson:

$ sudo tar -xzvf greengrass-linux-aarch64-1.9.1.tar.gz -C /
$ sudo tar -xzvf <hash>-setup.tar.gz -C /greengrass   # these are the security keys downloaded above

5. Download AWS ATS endpoint root certificate (CA):

$ cd /greengrass/certs/
$ sudo wget -O root.ca.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem

6. Start Greengrass core on your Jetson:

$ cd /greengrass/ggc/core/
$ sudo ./greengrassd start

You should see the message "Greengrass successfully started with PID: xxx" in your terminal.
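You can also confirm the daemon is still running by looking for its process (the brackets in the grep pattern keep grep from matching itself):

```shell
# check that the Greengrass daemon process is up
$ ps aux | grep -i "[g]reengrass"
```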

NVIDIA DeepStream

DeepStream 30 Stream.png

  • Website: https://developer.nvidia.com/deepstream-sdk
  • Version: 5.0 (Developer Preview)
  • Supports: JetPack >= 4.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
  • FAQ: https://developer.nvidia.com/deepstream-faq

Containers

Docker

Docker Logo.png

  • Source: https://github.com/docker
  • Version: 18.06
  • Support: ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
  • Installed by default in JetPack-L4T

To enable GPU passthrough, grant access to these device nodes with the --device flag when launching Docker containers:

/dev/nvhost-ctrl
/dev/nvhost-ctrl-gpu
/dev/nvhost-prof-gpu
/dev/nvmap
/dev/nvhost-gpu
/dev/nvhost-as-gpu

The /usr/lib/aarch64-linux-gnu/tegra directory also needs to be mounted into the container.

Below is an example command line for launching Docker with access to the GPU:

docker run --device=/dev/nvhost-ctrl --device=/dev/nvhost-ctrl-gpu --device=/dev/nvhost-prof-gpu --device=/dev/nvmap --device=/dev/nvhost-gpu --device=/dev/nvhost-as-gpu -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra <container-name>
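Before launching, you can confirm the GPU device nodes exist on the host (a quick pre-flight check; missing nodes usually mean the L4T drivers are not loaded):

```shell
# confirm the Jetson GPU device nodes are present on the host
$ ls -l /dev/nvhost-gpu /dev/nvhost-as-gpu /dev/nvmap
```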

To enable IPVLAN for Docker Swarm mode: https://blog.hypriot.com/post/nvidia-jetson-nano-build-kernel-docker-optimized/

Kubernetes

Kubernetes-Logo.png

  • Website: https://kubernetes.io/
  • Source: https://github.com/kubernetes/
  • Support: ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier NX / AGX Xavier)
  • Distributions:
      • MicroK8s (v1.14): $ sudo snap install microk8s --classic
      • k3s (v0.5.0): $ wget https://github.com/rancher/k3s/releases/download/v0.5.0/k3s-arm64

To configure L4T kernel for K8S: https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9
See also: https://medium.com/jit-team/building-a-gpu-enabled-kubernets-cluster-for-machine-learning-with-nvidia-jetson-nano-7b67de74172a
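Once the Jetson has joined a cluster, its status can be checked with kubectl (assuming kubectl is configured to reach your cluster):

```shell
# list cluster nodes; the Jetson should appear with STATUS Ready
$ kubectl get nodes -o wide
```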