Jetson Zoo
This page contains instructions for installing various open source add-on packages and frameworks on [[Jetson|NVIDIA Jetson]], in addition to a collection of DNN models for inferencing.

Below are links to precompiled binaries built for the aarch64 (arm64) architecture, including support for CUDA where applicable. These are intended to be installed on top of [https://developer.nvidia.com/embedded/jetpack JetPack].

Note that JetPack comes with various pre-installed components such as the L4T kernel, CUDA Toolkit, cuDNN, TensorRT, VisionWorks, OpenCV, GStreamer, Docker, and more.

For the latest updates and support, refer to the listed forum topics. Feel free to contribute to the list below if you know of software packages that are working & tested on Jetson.

= Machine Learning =

Jetson is able to natively run the full versions of popular machine learning frameworks, including TensorFlow, PyTorch, Caffe2, Keras, and MXNet.

There are also helpful deep learning examples and tutorials available, created specifically for Jetson - like [[Jetson_Zoo#Hello_AI_World|Hello AI World]] and JetBot.
== TensorFlow ==

[[File:TensorFlow_Logo.png|215px|right]]

* Website: [https://www.tensorflow.org/ https://tensorflow.org]
* Source: [https://github.com/tensorflow/tensorflow https://github.com/tensorflow/tensorflow]
* Version: 1.13.1
* Packages: [https://developer.download.nvidia.com/compute/redist/jp/v42/tensorflow-gpu/tensorflow_gpu-1.13.1+nv19.5-cp36-cp36m-linux_aarch64.whl pip wheel] (Python 3.6)
* Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
* Install Guide: [https://docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform/index.html#prereqs docs.nvidia.com/deeplearning/frameworks/install-tf-jetson-platform]
* Forum Topic: [https://devtalk.nvidia.com/default/topic/1048776/jetson-nano/official-tensorflow-for-jetson-nano-/ devtalk.nvidia.com/default/topic/1048776/jetson-nano/official-tensorflow-for-jetson-nano-/]
* Build from Source: [https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/ https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/]

<source lang="bash">
# install prerequisites
$ sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev

# install and upgrade pip3
$ sudo apt-get install python3-pip
$ sudo pip3 install -U pip

# install the following Python packages
$ sudo pip3 install -U numpy grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast h5py astor termcolor protobuf keras-applications keras-preprocessing wrapt google-pasta

# install the latest version of TensorFlow
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu
</source>
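A common failure mode when installing wheels by hand is grabbing one built for a different Python version or CPU architecture. As a quick sanity check (not part of the original instructions, just an illustrative sketch), the tags embedded in the wheel filename can be compared against the running interpreter using only the standard library:

```python
import platform
import sys

def wheel_matches_interpreter(wheel_name):
    """Check that a wheel filename's Python tag and platform tag
    match the running interpreter (e.g. cp36 on aarch64)."""
    # wheel naming convention: name-version-pythontag-abitag-platformtag.whl
    parts = wheel_name[:-len(".whl")].split("-")
    python_tag, platform_tag = parts[-3], parts[-1]
    expected = "cp%d%d" % (sys.version_info.major, sys.version_info.minor)
    return python_tag == expected and platform.machine() in platform_tag

# the TensorFlow wheel above targets Python 3.6 on aarch64
wheel = "tensorflow_gpu-1.13.1+nv19.5-cp36-cp36m-linux_aarch64.whl"
print("wheel matches this interpreter:", wheel_matches_interpreter(wheel))
```

On a Jetson running JetPack 4.2 with Python 3.6 this reports a match; on any other interpreter or architecture it flags the mismatch before pip does.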
== PyTorch (Caffe2) ==

* Website: [https://pytorch.org/ https://pytorch.org/]
* Source: [https://github.com/pytorch/pytorch https://github.com/pytorch/pytorch]
* Version: PyTorch v1.0.0 - v1.3.0
* Packages:
{| class="wikitable" style="margin-left:25px"
! Version !! Python 2.7 !! Python 3.6
|-
| v1.0.0 || pip wheel || pip wheel
|-
| v1.1.0 || [https://nvidia.box.com/v/torch-1-1-cp27-jetson-jp42 pip wheel] || [https://nvidia.box.com/v/torch-1-1-cp36-jetson-jp42 pip wheel]
|-
| v1.2.0 || [https://nvidia.box.com/v/torch-1-2-cp27-jetson-jp421 pip wheel] || [https://nvidia.box.com/v/torch-1-2-cp36-jetson-jp421 pip wheel]
|-
| v1.3.0 || [https://nvidia.box.com/v/torch-1-3-cp27-jetson-jp422 pip wheel] || [https://nvidia.box.com/v/torch-1-3-cp36-jetson-jp422 pip wheel]
|}
* Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
* Forum Topic: [https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/ devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/]
* Build from Source: [https://devtalk.nvidia.com/default/topic/1049071/#5324123 https://devtalk.nvidia.com/default/topic/1049071/#5324123]

Note: the PyTorch and Caffe2 projects have merged, so installing PyTorch will also install Caffe2.

<source lang="bash">
# Python 2.7 (download pip wheel from above)
$ pip install torch-1.3.0-cp27-cp27mu-linux_aarch64.whl

# Python 3.6 (download pip wheel from above)
$ pip3 install numpy torch-1.3.0-cp36-cp36m-linux_aarch64.whl
</source>
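The wheel filenames follow a predictable pattern, visible in the two install commands above: the Python 2.7 builds use the <code>cp27-cp27mu</code> ABI tag and the Python 3.6 builds use <code>cp36-cp36m</code>. As an illustrative sketch (not an official tool), a small helper can map a PyTorch version from the table to the filename passed to pip:

```python
def torch_wheel_name(torch_version, python_version):
    """Return the expected wheel filename for the Jetson PyTorch builds
    listed above, e.g. torch-1.3.0-cp36-cp36m-linux_aarch64.whl."""
    # ABI tags as used by the downloads above
    abi = {"2.7": "cp27-cp27mu", "3.6": "cp36-cp36m"}[python_version]
    return "torch-%s-%s-linux_aarch64.whl" % (torch_version, abi)

print(torch_wheel_name("1.3.0", "3.6"))  # torch-1.3.0-cp36-cp36m-linux_aarch64.whl
```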
== MXNet ==

* Packages:
** [https://drive.google.com/open?id=1ot6XtrV9r70wUzM13bpTzrGiJWlfUddV pip wheel] (Python 2.7)
** [https://drive.google.com/open?id=1jr-kP1_zlLa9tx-GtdlBV3Nn20qRJgzY pip wheel] (Python 3.6)
* Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
* Forum Topic: [https://devtalk.nvidia.com/default/topic/1049293/#5326170 https://devtalk.nvidia.com/default/topic/1049293/#5326170]
* Build from Source: [https://devtalk.nvidia.com/default/topic/1049293/#5326119 https://devtalk.nvidia.com/default/topic/1049293/#5326119]
== Keras ==

* Source: [https://github.com/keras-team/keras https://github.com/keras-team/keras]
* Version: 2.2.4
* Forum Topic: [https://devtalk.nvidia.com/default/topic/1049362/#5325752 https://devtalk.nvidia.com/default/topic/1049362/#5325752]

First, [[Jetson_Zoo#TensorFlow|install TensorFlow]] from above.

<source lang="bash">
# beforehand, install TensorFlow (https://eLinux.org/Jetson_Zoo#TensorFlow)
$ sudo apt-get install -y build-essential libatlas-base-dev gfortran
$ sudo pip install keras
</source>
== Hello AI World ==

[[File:Hello-AI-World-CV.png|375px|right]]

* Website: [https://developer.nvidia.com/embedded/twodaystoademo https://developer.nvidia.com/embedded/twodaystoademo]
* Source: [https://github.com/dusty-nv/jetson-inference https://github.com/dusty-nv/jetson-inference]
* Supports: Jetson Nano, TX1, TX2, Xavier
* Build from Source:

<source lang="bash">
# download the repo
$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ git submodule update --init

# configure the build tree
$ mkdir build
$ cd build
$ cmake ../

# build and install
$ make
$ sudo make install
$ sudo ldconfig
</source>
+ | |||
+ | == Model Zoo == | ||
+ | |||
+ | Below are various DNN models for inferencing on Jetson with support for TensorRT. Included are links to code samples with the model and the original source. | ||
+ | |||
+ | Note that many other models are able to run natively on Jetson by using the [[Jetson_Zoo#Machine_Learning|Machine Learning]] frameworks like those listed above. | ||
+ | |||
+ | For performance benchmarks, see these resources: | ||
+ | |||
+ | * [https://developer.nvidia.com/embedded/jetson-nano-dl-inference-benchmarks Jetson Nano Deep Learning Inference Benchmarks] | ||
+ | * Jetson TX1/TX2 - [https://www.nvidia.com/en-us/data-center/resources/inference-technical-overview/ NVIDIA AI Inference Technical Overview] | ||
+ | * [https://developer.nvidia.com/embedded/jetson-agx-xavier-dl-inference-benchmarks Jetson AGX Xavier Deep Learning Inference Benchmarks] | ||
+ | |||
=== Classification ===

{| class="wikitable" style="text-align: center; padding=0;"
! scope="col" style="width: 150px;" | Network
! scope="col" style="width: 100px;" | Dataset
! scope="col" style="width: 65px;" | Resolution
! scope="col" style="width: 65px;" | Classes
! scope="col" style="width: 65px;" | Framework
! scope="col" style="width: 110px;" | Format
! scope="col" style="width: 65px;" | TensorRT
! scope="col" style="width: 150px;" | Samples
! scope="col" style="width: 65px;" | Original
|-
| AlexNet || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet BVLC]
|-
| GoogleNet || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet BVLC]
|-
| ResNet-18 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/HolmesShuan/ResNet-18-Caffemodel-on-ImageNet GitHub]
|-
| ResNet-50 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/KaimingHe/deep-residual-networks GitHub]
|-
| ResNet-101 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/KaimingHe/deep-residual-networks GitHub]
|-
| ResNet-152 || [http://image-net.org/challenges/LSVRC/2015/ ILSVRC15] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/KaimingHe/deep-residual-networks GitHub]
|-
| VGG-16 || [http://image-net.org/challenges/LSVRC/2014/ ILSVRC14] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://gist.github.com/ksimonyan/211839e770f7b538e2d8 GitHub]
|-
| VGG-19 || [http://image-net.org/challenges/LSVRC/2014/ ILSVRC14] || 224x224 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://gist.github.com/ksimonyan/3785162f95cd2d5fee77 GitHub]
|-
| Inception-v4 || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 299x299 || 1000 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/SnailTyan/caffe-model-zoo/tree/master/Inception-v4 GitHub]
|-
| Inception-v4 || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 299x299 || 1000 || TensorFlow || <code>TF-TRT (UFF)</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/tf_trt_models tf_trt_models] || [https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models TF-slim]
|-
| Mobilenet-v1 || [http://image-net.org/challenges/LSVRC/2012/ ILSVRC12] || 224x224 || 1000 || TensorFlow || <code>TF-TRT (UFF)</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/tf_trt_models tf_trt_models] || [https://github.com/tensorflow/models/tree/master/research/slim#pre-trained-models TF-slim]
|}
=== Object Detection ===

{| class="wikitable" style="text-align: center; padding=0;"
! scope="col" style="width: 150px;" | Network
! scope="col" style="width: 100px;" | Dataset
! scope="col" style="width: 65px;" | Resolution
! scope="col" style="width: 65px;" | Classes
! scope="col" style="width: 65px;" | Framework
! scope="col" style="width: 110px;" | Format
! scope="col" style="width: 65px;" | TensorRT
! scope="col" style="width: 150px;" | Samples
! scope="col" style="width: 65px;" | Original
|-
| SSD-Mobilenet-v1 || [http://cocodataset.org COCO] || 300x300 || 91 || TensorFlow || <code>UFF</code> || {{Yes}} || rowspan="3" | [[Jetson_Zoo#Hello_AI_World|Hello AI World]] <br /> <small>[https://github.com/AastaNV/TRT_object_detection TRT_object_detection]</small> || [https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models TF Zoo]
|-
| SSD-Mobilenet-v2 || [http://cocodataset.org COCO] || 300x300 || 91 || TensorFlow || <code>UFF</code> || {{Yes}} || [https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models TF Zoo]
|-
| SSD-Inception-v2 || [http://cocodataset.org COCO] || 300x300 || 91 || TensorFlow || <code>UFF</code> || {{Yes}} || [https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md#coco-trained-models TF Zoo]
|-
| YOLO-v2 || [http://cocodataset.org COCO] || 608x608 || 80 || Darknet || <code>Custom</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo#trt-yolo-app trt-yolo-app] || [https://pjreddie.com/darknet/yolo/ YOLO]
|-
| YOLO-v3 || [http://cocodataset.org COCO] || 608x608 || 80 || Darknet || <code>Custom</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo#trt-yolo-app trt-yolo-app] || [https://pjreddie.com/darknet/yolo/ YOLO]
|-
| Tiny YOLO-v3 || [http://cocodataset.org COCO] || 416x416 || 80 || Darknet || <code>Custom</code> || {{Yes}} || [https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/tree/restructure/yolo#trt-yolo-app trt-yolo-app] || [https://pjreddie.com/darknet/yolo/ YOLO]
|-
| Faster-RCNN || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 500x375 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#fasterrcnn_sample TensorRT sample] || [https://github.com/rbgirshick/py-faster-rcnn GitHub]
|}
=== Segmentation ===

{| class="wikitable" style="text-align: center; padding=0;"
! scope="col" style="width: 150px;" | Network
! scope="col" style="width: 100px;" | Dataset
! scope="col" style="width: 65px;" | Resolution
! scope="col" style="width: 65px;" | Classes
! scope="col" style="width: 65px;" | Framework
! scope="col" style="width: 110px;" | Format
! scope="col" style="width: 65px;" | TensorRT
! scope="col" style="width: 150px;" | Samples
! scope="col" style="width: 65px;" | Original
|-
| FCN-ResNet18 || [https://www.cityscapes-dataset.com/ Cityscapes] || 2048x1024 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://www.cityscapes-dataset.com/ Cityscapes] || 1024x512 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://www.cityscapes-dataset.com/ Cityscapes] || 512x256 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://deepscene.cs.uni-freiburg.de/ DeepScene] || 864x480 || 5 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://deepscene.cs.uni-freiburg.de/ DeepScene] || 576x320 || 5 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://lv-mhp.github.io/ Multi-Human] || 640x360 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [https://lv-mhp.github.io/ Multi-Human] || 512x320 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 512x320 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 320x320 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://rgbd.cs.princeton.edu/ SUN RGB-D] || 640x512 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-ResNet18 || [http://rgbd.cs.princeton.edu/ SUN RGB-D] || 512x400 || 21 || PyTorch || <code>ONNX</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md GitHub]
|-
| FCN-Alexnet || [https://www.cityscapes-dataset.com/ Cityscapes] || 2048x1024 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-pretrained.md#generating-pretrained-fcn-alexnet GitHub]
|-
| FCN-Alexnet || [https://www.cityscapes-dataset.com/ Cityscapes] || 1024x512 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-pretrained.md#generating-pretrained-fcn-alexnet GitHub]
|-
| FCN-Alexnet || [http://host.robots.ox.ac.uk/pascal/VOC/ Pascal VOC] || 500x356 || 21 || Caffe || <code>caffemodel</code> || {{Yes}} || [[Jetson_Zoo#Hello_AI_World|Hello AI World]] || [https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-pretrained.md#generating-pretrained-fcn-alexnet GitHub]
|-
| U-Net || Carvana || 512x512 || 1 || TensorFlow || <code>UFF</code> || {{Yes}} || [https://devtalk.nvidia.com/default/topic/1050377/jetson-nano/deep-learning-inference-benchmarking-instructions/ Nano Benchmarks] || [https://github.com/lyatdawn/Unet-Tensorflow GitHub]
|}
= Computer Vision =

== OpenCV ==

[[File:OpenCV_Logo.png|215px|right]]

* Website: [https://opencv.org/ https://opencv.org/]
* Source: [https://github.com/opencv/opencv https://github.com/opencv/opencv]
* Version: 3.3.1
* Supports: Jetson Nano / TX1 / TX2 / Xavier

OpenCV 3.3.1 is included with JetPack, compiled with support for GStreamer. To build a newer version or to enable CUDA support, see these guides:
* [https://github.com/mdegans/nano_build_opencv nano_build_opencv] (GitHub)
* [https://jkjung-avt.github.io/opencv-on-nano/ Installing OpenCV 3.4.6]
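The GStreamer support mentioned above is how the Jetson's CSI camera is typically reached from <code>cv2.VideoCapture</code>. Below is a minimal sketch of building the pipeline string; the <code>nvarguscamerasrc</code> element and its parameters are assumptions based on the JetPack 4.2 L4T multimedia stack, not something specified on this page:

```python
def csi_camera_pipeline(width=1280, height=720, fps=30, flip=0):
    """Build a GStreamer pipeline string for the Jetson CSI camera,
    suitable for cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)."""
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=%d, height=%d, framerate=%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink" % (width, height, fps, flip)
    )

# usage on a Jetson (requires an OpenCV build with GStreamer enabled):
#   import cv2
#   cap = cv2.VideoCapture(csi_camera_pipeline(), cv2.CAP_GSTREAMER)
print(csi_camera_pipeline())
```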
= Robotics =

== ROS ==

[[File:Ros_logo.png|200px|right]]

* Website: [http://ros.org/ http://ros.org/]
* Source: [https://github.com/ros https://github.com/ros]
* Version: ROS Melodic
* Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
* Installation: [http://wiki.ros.org/melodic/Installation/Ubuntu http://wiki.ros.org/melodic/Installation/Ubuntu]

<source lang="bash">
# enable all Ubuntu package repositories
$ sudo apt-add-repository universe
$ sudo apt-add-repository multiverse
$ sudo apt-add-repository restricted

# add the ROS repository to apt sources
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

# install ROS Base
$ sudo apt-get update
$ sudo apt-get install ros-melodic-ros-base

# add ROS paths to your environment
$ echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
</source>
== NVIDIA Isaac SDK ==

[[File:Isaac_Gems.jpg|300px|right]]

* Website: [https://developer.nvidia.com/isaac-sdk https://developer.nvidia.com/isaac-sdk]
* Version: 2019.2
* Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
* Downloads: [https://developer.nvidia.com/isaac/downloads https://developer.nvidia.com/isaac/downloads]
* Documentation: [https://docs.nvidia.com/isaac https://docs.nvidia.com/isaac]

=== Isaac SIM ===

* Downloads: [https://developer.nvidia.com/isaac/downloads https://developer.nvidia.com/isaac/downloads]
* Documentation: [http://docs.nvidia.com/isaac/isaac_sim/index.html http://docs.nvidia.com/isaac/isaac_sim/index.html]
= IoT / Edge =

== AWS Greengrass ==

[[File:Greengrass_Logo.png|250px|right]]

* Website: [https://aws.amazon.com/greengrass/ https://aws.amazon.com/greengrass/]
* Source: [https://github.com/aws/aws-greengrass-core-sdk-c https://github.com/aws/aws-greengrass-core-sdk-c]
* Version: v1.9.1
* Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
* Forum Thread: [https://devtalk.nvidia.com/default/topic/1052324/jetson-nano/jetson-nano-aws-greengrass-/post/5341970/#5341970 https://devtalk.nvidia.com/default/topic/1052324/#5341970]

1. Create the Greengrass user and group:
<source lang="bash">
$ sudo adduser --system ggc_user
$ sudo addgroup --system ggc_group
</source>

2. Set up your AWS account and Greengrass group by following this page: [https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-config.html https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-config.html] <br />
{{spaces|3}} After downloading to your Jetson the unique security resource keys that were created in this step, proceed to step 3 below.

3. Download the [https://docs.aws.amazon.com/greengrass/latest/developerguide/what-is-gg.html#gg-core-download-tab AWS IoT Greengrass Core Software] (v1.9.1) for ARMv8 (aarch64):
<source lang="bash">
$ wget https://d1onfpft10uf5o.cloudfront.net/greengrass-core/downloads/1.9.1/greengrass-linux-aarch64-1.9.1.tar.gz
</source>

4. Following step 4 from [https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-device-start.html this page], extract Greengrass Core and your unique security keys on your Jetson:
<source lang="bash">
$ sudo tar -xzvf greengrass-linux-aarch64-1.9.1.tar.gz -C /
$ sudo tar -xzvf <hash>-setup.tar.gz -C /greengrass   # these are the security keys downloaded above
</source>

5. Download the AWS ATS endpoint root certificate (CA):
<source lang="bash">
$ cd /greengrass/certs/
$ sudo wget -O root.ca.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem
</source>

6. Start Greengrass Core on your Jetson:
<source lang="bash">
$ cd /greengrass/ggc/core/
$ sudo ./greengrassd start
</source>

You should see a message in your terminal: <code>Greengrass successfully started with PID: xxx</code>
== NVIDIA DeepStream ==

[[File:DeepStream_30_Stream.png|300px|right]]

* Website: [https://developer.nvidia.com/deepstream-sdk https://developer.nvidia.com/deepstream-sdk]
* Version: 4.0.1
* Supports: JetPack 4.2.2 (Jetson Nano / TX2 / Xavier)
* FAQ: [https://developer.nvidia.com/deepstream-faq https://developer.nvidia.com/deepstream-faq]
* GitHub Samples:
** [https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps deepstream_reference_apps]
** [https://github.com/NVIDIA-AI-IOT/redaction_with_deepstream redaction_with_deepstream]
** [https://github.com/NVIDIA-AI-IOT/deepstream_360_d_smart_parking_application 360° Smart Parking Application]
= Containers =

== Docker ==

[[File:Docker_Logo.png|265px|right]]

* Website: [https://www.docker.com/ https://docker.com/]
* Source: [https://github.com/docker https://github.com/docker]
* Version: 18.06
* Supports: ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier)
* Installed by default in JetPack-L4T

To enable GPU passthrough, enable access to these device nodes with the <code>--device</code> flag when launching Docker containers:

<source lang="bash">
/dev/nvhost-ctrl
/dev/nvhost-ctrl-gpu
/dev/nvhost-prof-gpu
/dev/nvmap
/dev/nvhost-gpu
/dev/nvhost-as-gpu
</source>

The <code>/usr/lib/aarch64-linux-gnu/tegra</code> directory also needs to be mounted into the container.

Below is an example command line for launching Docker with access to the GPU:

<source lang="bash">
$ docker run --device=/dev/nvhost-ctrl --device=/dev/nvhost-ctrl-gpu --device=/dev/nvhost-prof-gpu --device=/dev/nvmap --device=/dev/nvhost-gpu --device=/dev/nvhost-as-gpu -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra <container-name>
</source>

To enable IPVLAN for Docker Swarm mode, see: [https://blog.hypriot.com/post/nvidia-jetson-nano-build-kernel-docker-optimized/ https://blog.hypriot.com/post/nvidia-jetson-nano-build-kernel-docker-optimized/]
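The long <code>docker run</code> line above is easy to mistype. As an illustration (not an official tool), the same invocation can be assembled from the device-node list, which is convenient when launching containers from a script:

```python
# the GPU device nodes listed above
JETSON_DEVICES = [
    "/dev/nvhost-ctrl",
    "/dev/nvhost-ctrl-gpu",
    "/dev/nvhost-prof-gpu",
    "/dev/nvmap",
    "/dev/nvhost-gpu",
    "/dev/nvhost-as-gpu",
]
TEGRA_LIBS = "/usr/lib/aarch64-linux-gnu/tegra"

def docker_run_command(container):
    """Assemble the docker run invocation shown above as an
    argument vector, suitable for subprocess.run()."""
    cmd = ["docker", "run"]
    for dev in JETSON_DEVICES:
        cmd.append("--device=" + dev)
    cmd += ["-v", TEGRA_LIBS + ":" + TEGRA_LIBS, container]
    return cmd

print(" ".join(docker_run_command("<container-name>")))
```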
== Kubernetes ==

[[File:Kubernetes-Logo.png|265px|right]]

* Website: [https://kubernetes.io/ https://kubernetes.io/]
* Source: [https://github.com/kubernetes/ https://github.com/kubernetes/]
* Supports: ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier)
* Distributions:
** [https://microk8s.io/docs/ MicroK8s] (v1.14) {{spaces|1}} <code>$ sudo snap install microk8s --classic</code>
** [https://github.com/rancher/k3s k3s] (v0.5.0) {{spaces|9}} <code>$ wget https://github.com/rancher/k3s/releases/download/v0.5.0/k3s-arm64</code>

To configure the L4T kernel for Kubernetes, see: [https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9 https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9]
Revision as of 07:39, 31 October 2019
This page contains instructions for installing various open source add-on packages and frameworks on NVIDIA Jetson, in addition to a collection of DNN models for inferencing.
Below are links to precompiled binaries built for aarch64 (arm64) architecture, including support for CUDA where applicable. These are intended to be installed on top of JetPack.
Note that JetPack comes with various pre-installed components such as the L4T kernel, CUDA Toolkit, cuDNN, TensorRT, VisionWorks, OpenCV, GStreamer, Docker, and more.
For the latest updates and support, refer to the listed forum topics. Feel free to contribute to the list below if you know of software packages that are working & tested on Jetson.
Machine Learning
Jetson is able to natively run the full versions of popular machine learning frameworks, including TensorFlow, PyTorch, Caffe2, Keras, and MXNet.
There are also helpful deep learning examples and tutorials available that were created specifically for Jetson, such as Hello AI World and JetBot.
TensorFlow
- Website: https://tensorflow.org
- Source: https://github.com/tensorflow/tensorflow
- Version: 1.13.1
- Packages: pip wheel (Python 3.6)
- Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
- Install Guide: [1]
- Forum Topic: devtalk.nvidia.com/default/topic/1048776/jetson-nano/official-tensorflow-for-jetson-nano-/
- Build from Source: https://devtalk.nvidia.com/default/topic/1055131/jetson-agx-xavier/building-tensorflow-1-13-on-jetson-xavier/
# install prerequisites
$ sudo apt-get install libhdf5-serial-dev hdf5-tools libhdf5-dev zlib1g-dev zip libjpeg8-dev
# install and upgrade pip3
$ sudo apt-get install python3-pip
$ sudo pip3 install -U pip
# install the following python packages
$ sudo pip3 install -U numpy grpcio absl-py py-cpuinfo psutil portpicker six mock requests gast h5py astor termcolor protobuf keras-applications keras-preprocessing wrapt google-pasta
# install the latest version of TensorFlow
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu
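After the install completes, a quick sanity check can confirm that the wheel imports. This is a sketch that only assumes Python 3 is present; it prints a status either way rather than failing:

```shell
# check that the TensorFlow wheel imports and report its version
# (prints "not installed" if the import fails)
TF_STATUS=$(python3 -c "import tensorflow as tf; print(tf.__version__)" 2>/dev/null || echo "not installed")
echo "TensorFlow: $TF_STATUS"
```

On a working JetPack 4.2.x install this should report version 1.13.1; if it reports "not installed", re-check the pip3 steps above.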
PyTorch (Caffe2)
- Website: https://pytorch.org/
- Source: https://github.com/pytorch/pytorch
- Version: PyTorch v1.0.0 - v1.3.0
- Packages:
| Python 2.7 | Python 3.6 |
---|---|---|
v1.0.0 | pip wheel | pip wheel |
v1.1.0 | pip wheel | pip wheel |
v1.2.0 | pip wheel | pip wheel |
v1.3.0 | pip wheel | pip wheel |
- Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
- Forum Topic: devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/
- Build from Source: https://devtalk.nvidia.com/default/topic/1049071/#5324123
Note: the PyTorch and Caffe2 projects have merged, so installing PyTorch will also install Caffe2.
# Python 2.7 (download pip wheel from above)
$ pip install torch-1.3.0-cp27-cp27mu-linux_aarch64.whl
# Python 3.6 (download pip wheel from above)
$ pip3 install numpy torch-1.3.0-cp36-cp36m-linux_aarch64.whl
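A similar sanity check for PyTorch, also verifying that CUDA is visible to the wheel. This is a sketch that prints a status either way and assumes only that Python 3 is present:

```shell
# report the installed torch version and whether CUDA is available
# (prints "not installed" if the import fails)
TORCH_STATUS=$(python3 -c "import torch; print(torch.__version__, '- CUDA available:', torch.cuda.is_available())" 2>/dev/null || echo "not installed")
echo "PyTorch: $TORCH_STATUS"
```

On Jetson, "CUDA available: True" indicates the wheel was built against the JetPack CUDA Toolkit as expected.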
MXNet
- Website: https://mxnet.apache.org/
- Source: https://github.com/apache/incubator-mxnet
- Version: 1.4
- Packages:
- Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
- Forum Topic: https://devtalk.nvidia.com/default/topic/1049293/#5326170
- Build from Source: https://devtalk.nvidia.com/default/topic/1049293/#5326119
# Python 2.7
$ sudo apt-get install -y git build-essential libatlas-base-dev libopencv-dev graphviz python-pip
$ sudo pip install mxnet-1.4.0-cp27-cp27mu-linux_aarch64.whl
# Python 3.6
$ sudo apt-get install -y git build-essential libatlas-base-dev libopencv-dev graphviz python3-pip
$ sudo pip3 install mxnet-1.4.0-cp36-cp36m-linux_aarch64.whl
Keras
- Website: https://keras.io/
- Source: https://github.com/keras-team/keras
- Version: 2.2.4
- Forum Topic: https://devtalk.nvidia.com/default/topic/1049362/#5325752
First, install TensorFlow from above.
# beforehand, install TensorFlow (https://eLinux.org/Jetson_Zoo#TensorFlow)
$ sudo apt-get install -y build-essential libatlas-base-dev gfortran
$ sudo pip3 install keras
Hello AI World
- Website: https://developer.nvidia.com/embedded/twodaystoademo
- Source: https://github.com/dusty-nv/jetson-inference
- Supports: Jetson Nano, TX1, TX2, Xavier
- Build from Source:
# download the repo
$ git clone https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ git submodule update --init
# configure build tree
$ mkdir build
$ cd build
$ cmake ../
# build and install
$ make
$ sudo make install
$ sudo ldconfig
Model Zoo
Below are various DNN models for inferencing on Jetson with support for TensorRT. Included are links to code samples with the model and the original source.
Note that many other models can run natively on Jetson using the machine learning frameworks like those listed above.
For performance benchmarks, see these resources:
- Jetson Nano Deep Learning Inference Benchmarks
- Jetson TX1/TX2 - NVIDIA AI Inference Technical Overview
- Jetson AGX Xavier Deep Learning Inference Benchmarks
Classification
Network | Dataset | Resolution | Classes | Framework | Format | TensorRT | Samples | Original |
---|---|---|---|---|---|---|---|---|
AlexNet | ILSVRC12 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | BVLC |
GoogleNet | ILSVRC12 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | BVLC |
ResNet-18 | ILSVRC15 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
ResNet-50 | ILSVRC15 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
ResNet-101 | ILSVRC15 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
ResNet-152 | ILSVRC15 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
VGG-16 | ILSVRC14 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
VGG-19 | ILSVRC14 | 224x224 | 1000 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
Inception-v4 | ILSVRC12 | 299x299 | 1000 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
Inception-v4 | ILSVRC12 | 299x299 | 1000 | TensorFlow | TF-TRT (UFF) | Yes | tf_trt_models | TF-slim |
Mobilenet-v1 | ILSVRC12 | 224x224 | 1000 | TensorFlow | TF-TRT (UFF) | Yes | tf_trt_models | TF-slim |
Object Detection
Network | Dataset | Resolution | Classes | Framework | Format | TensorRT | Samples | Original |
---|---|---|---|---|---|---|---|---|
SSD-Mobilenet-v1 | COCO | 300x300 | 91 | TensorFlow | UFF | Yes | Hello AI World, TRT_object_detection | TF Zoo |
SSD-Mobilenet-v2 | COCO | 300x300 | 91 | TensorFlow | UFF | Yes | | TF Zoo |
SSD-Inception-v2 | COCO | 300x300 | 91 | TensorFlow | UFF | Yes | | TF Zoo |
YOLO-v2 | COCO | 608x608 | 80 | Darknet | Custom | Yes | trt-yolo-app | YOLO |
YOLO-v3 | COCO | 608x608 | 80 | Darknet | Custom | Yes | trt-yolo-app | YOLO |
Tiny YOLO-v3 | COCO | 416x416 | 80 | Darknet | Custom | Yes | trt-yolo-app | YOLO |
Faster-RCNN | Pascal VOC | 500x375 | 21 | Caffe | caffemodel | Yes | TensorRT sample | GitHub |
Segmentation
Network | Dataset | Resolution | Classes | Framework | Format | TensorRT | Samples | Original |
---|---|---|---|---|---|---|---|---|
FCN-ResNet18 | Cityscapes | 2048x1024 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | Cityscapes | 1024x512 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | Cityscapes | 512x256 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | DeepScene | 864x480 | 5 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | DeepScene | 576x320 | 5 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | Multi-Human | 640x360 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | Multi-Human | 512x320 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | Pascal VOC | 512x320 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | Pascal VOC | 320x320 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | SUN RGB-D | 640x512 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-ResNet18 | SUN RGB-D | 512x400 | 21 | PyTorch | ONNX | Yes | Hello AI World | GitHub |
FCN-Alexnet | Cityscapes | 2048x1024 | 21 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
FCN-Alexnet | Cityscapes | 1024x512 | 21 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
FCN-Alexnet | Pascal VOC | 500x356 | 21 | Caffe | caffemodel | Yes | Hello AI World | GitHub |
U-Net | Carvana | 512x512 | 1 | TensorFlow | UFF | Yes | Nano Benchmarks | GitHub |
Computer Vision
OpenCV
- Website: https://opencv.org/
- Source: https://github.com/opencv/opencv
- Version: 3.3.1
- Supports: Jetson Nano / TX1 / TX2 / Xavier
- OpenCV 3.3.1 is included with JetPack, compiled with support for GStreamer. To build a newer version or to enable CUDA support, see these guides:
- nano_build_opencv (GitHub)
- Installing OpenCV 3.4.6
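The GStreamer support in the stock OpenCV build is what enables capture from the Jetson's onboard CSI camera. A common pattern is to hand cv2.VideoCapture a GStreamer pipeline string like the sketch below; it assumes the nvarguscamerasrc element provided by L4T and an OpenCV build with GStreamer enabled:

```shell
# GStreamer pipeline string for the onboard CSI camera on Jetson
# (in Python, pass this string to cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER))
PIPELINE="nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink"
echo "$PIPELINE"
```

The nvvidconv element converts from the camera's NVMM memory into standard CPU-accessible frames that OpenCV can consume.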
Robotics
ROS
- Website: http://ros.org/
- Source: https://github.com/ros
- Version: ROS Melodic
- Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
- Installation: http://wiki.ros.org/melodic/Installation/Ubuntu
# enable all Ubuntu packages:
$ sudo apt-add-repository universe
$ sudo apt-add-repository multiverse
$ sudo apt-add-repository restricted
# add ROS repository to apt sources
$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
# install ROS Base
$ sudo apt-get update
$ sudo apt-get install ros-melodic-ros-base
# add ROS paths to environment
$ echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
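Once installed, a quick check that ROS is on the path. This is a sketch that prints a status either way; it assumes only a standard Melodic install location:

```shell
# source the ROS environment if present, then report the detected distribution
[ -f /opt/ros/melodic/setup.bash ] && . /opt/ros/melodic/setup.bash || true
ROS_STATUS=$(rosversion -d 2>/dev/null || echo "not installed")
echo "ROS distribution: $ROS_STATUS"
```

On a working install this should report "melodic".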
NVIDIA Isaac SDK
- Website: https://developer.nvidia.com/isaac-sdk
- Version: 2019.2
- Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
- Downloads: https://developer.nvidia.com/isaac/downloads
- Documentation: https://docs.nvidia.com/isaac
Isaac SIM
- Downloads: https://developer.nvidia.com/isaac/downloads
- Documentation: http://docs.nvidia.com/isaac/isaac_sim/index.html
IoT / Edge
AWS Greengrass
- Website: https://aws.amazon.com/greengrass/
- Source: https://github.com/aws/aws-greengrass-core-sdk-c
- Version: v1.9.1
- Supports: JetPack 4.2.x (Jetson Nano / TX2 / Xavier)
- Forum Thread: https://devtalk.nvidia.com/default/topic/1052324/#5341970
1. Create the Greengrass user and group:
$ sudo adduser --system ggc_user
$ sudo addgroup --system ggc_group
2. Set up your AWS account and Greengrass group by following this page: https://docs.aws.amazon.com/greengrass/latest/developerguide/gg-config.html
After downloading to your Jetson the unique security resource keys that were created in this step, proceed to step 3 below.
3. Download the AWS IoT Greengrass Core Software (v1.9.1) for ARMv8 (aarch64):
$ wget https://d1onfpft10uf5o.cloudfront.net/greengrass-core/downloads/1.9.1/greengrass-linux-aarch64-1.9.1.tar.gz
4. Following step #4 from the page above, extract the Greengrass core and your unique security keys on your Jetson:
$ sudo tar -xzvf greengrass-linux-aarch64-1.9.1.tar.gz -C /
$ sudo tar -xzvf <hash>-setup.tar.gz -C /greengrass # these are the security keys downloaded above
5. Download AWS ATS endpoint root certificate (CA):
$ cd /greengrass/certs/
$ sudo wget -O root.ca.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem
6. Start Greengrass core on your Jetson:
$ cd /greengrass/ggc/core/
$ sudo ./greengrassd start
You should get a message in your terminal: Greengrass successfully started with PID: xxx
NVIDIA DeepStream
- Website: https://developer.nvidia.com/deepstream-sdk
- Version: 4.0.1
- Supports: JetPack 4.2.2 (Jetson Nano / TX2 / Xavier)
- FAQ: https://developer.nvidia.com/deepstream-faq
- GitHub Samples:
Containers
Docker
- Website: https://docker.com/
- Source: https://github.com/docker
- Version: 18.06
- Support: ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier)
- Installed by default in JetPack-L4T
To enable GPU passthrough, enable access to these device nodes with the --device flag when launching Docker containers:
/dev/nvhost-ctrl
/dev/nvhost-ctrl-gpu
/dev/nvhost-prof-gpu
/dev/nvmap
/dev/nvhost-gpu
/dev/nvhost-as-gpu
The /usr/lib/aarch64-linux-gnu/tegra directory also needs to be mounted inside the container.
Below is an example command line for launching Docker with access to the GPU:
docker run --device=/dev/nvhost-ctrl --device=/dev/nvhost-ctrl-gpu --device=/dev/nvhost-prof-gpu --device=/dev/nvmap --device=/dev/nvhost-gpu --device=/dev/nvhost-as-gpu -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra <container-name>
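The long command line above can also be assembled in a small script, which keeps the device list in one place. A sketch, using the same device nodes and tegra mount listed above:

```shell
# assemble the --device flags for GPU access from the device nodes listed above
DEVICES="/dev/nvhost-ctrl /dev/nvhost-ctrl-gpu /dev/nvhost-prof-gpu /dev/nvmap /dev/nvhost-gpu /dev/nvhost-as-gpu"
DOCKER_ARGS=""
for dev in $DEVICES; do
    DOCKER_ARGS="$DOCKER_ARGS --device=$dev"
done
# mount the tegra driver directory into the container
DOCKER_ARGS="$DOCKER_ARGS -v /usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra"
echo "docker run$DOCKER_ARGS <container-name>"
```

Substitute your image name for <container-name> when launching.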
To enable IPVLAN for Docker Swarm mode: https://blog.hypriot.com/post/nvidia-jetson-nano-build-kernel-docker-optimized/
Kubernetes
- Website: https://kubernetes.io/
- Source: https://github.com/kubernetes/
- Support: ≥ JetPack 3.2 (Jetson Nano / TX1 / TX2 / Xavier)
- Distributions:
- MicroK8s (v1.14)
$ sudo snap install microk8s --classic
- k3s (v0.5.0)
$ wget https://github.com/rancher/k3s/releases/download/v0.5.0/k3s-arm64
To configure L4T kernel for K8S: https://medium.com/@jerry_liang/deploy-gpu-enabled-kubernetes-pod-on-nvidia-jetson-nano-ce738e3bcda9