Quantized Deep Neural Network on Jetson AGX Xavier

Version 1.0.0 (10.6 MB) by Kei Otsuka
How to create, train, and quantize a network, then generate CUDA C++ code targeting the Jetson AGX Xavier
Downloads: 233
Updated 2020/7/10

Deep learning is a powerful approach to solving difficult problems such as image classification, segmentation, and detection. However, performing inference with a deep neural network is computationally intensive and consumes a significant amount of memory. Even networks that are small in size require considerable memory and hardware to perform the arithmetic operations involved. These restrictions can inhibit the deployment of deep learning networks to devices with low computational power and limited memory resources.

In such cases, you can use Deep Learning Toolbox together with the Deep Learning Toolbox Model Quantization Library support package to reduce the memory footprint of a deep neural network by quantizing the weights, biases, and activations of the convolution layers to 8-bit scaled integer data types. You can then use GPU Coder to generate optimized CUDA code for the quantized network.
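As a rough illustration, a minimal MATLAB sketch of that quantization and code-generation step might look like the following. It assumes a trained network `net`, a calibration image datastore `calDS`, an entry-point function `predictEntryPoint`, and a 224x224x3 input size; all of these names and values are illustrative assumptions, not taken from this submission.

```matlab
% Minimal sketch of int8 quantization and CUDA code generation
% (assumes a trained network "net" and a calibration image datastore
%  "calDS"; names and sizes here are illustrative only).

% Create a quantizer object for GPU execution
quantObj = dlquantizer(net, 'ExecutionEnvironment', 'GPU');

% Collect dynamic ranges of weights, biases, and activations
calResults = calibrate(quantObj, calDS);

% Save the calibrated quantizer so code generation can use it
save('quantObj.mat', 'quantObj');

% Configure GPU Coder to generate int8 (cuDNN) code for the quantized network
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.DeepLearningConfig.DataType = 'int8';
cfg.DeepLearningConfig.CalibrationResultFile = 'quantObj.mat';

% "predictEntryPoint" is an illustrative entry-point function that loads
% the network with coder.loadDeepLearningNetwork and calls predict
codegen -config cfg predictEntryPoint -args {ones(224,224,3,'single')}
```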

This example shows how to create, train, and quantize a simple convolutional neural network for defect detection, and then demonstrates how to generate code for the whole algorithm, including pre- and post-processing of images as well as the convolutional neural network, so that you can deploy it to NVIDIA GPUs such as the Jetson AGX Xavier, Jetson Nano, and Drive platforms.
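For the "create and train" part, a minimal sketch could look like the following. The folder name `nutImages`, the layer sizes, and the training options are illustrative assumptions (and assume the images already match the input size); they are not taken from this submission.

```matlab
% Minimal sketch of defining and training a small CNN for two-class
% defect detection (data folder, layer sizes, and options are illustrative).

% Labeled images in subfolders, e.g. "ok" and "defect"
imds = imageDatastore('nutImages', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainDS, valDS] = splitEachLabel(imds, 0.8, 'randomized');

layers = [
    imageInputLayer([128 128 3])
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];

options = trainingOptions('sgdm', ...
    'MaxEpochs', 10, ...
    'ValidationData', valDS, ...
    'Plots', 'training-progress');

net = trainNetwork(trainDS, layers, options);
```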

This example demonstrates how to:

1. Load and explore image data
2. Define the network architecture and training options
3. Train the network and classify validation images
4. Quantize the network to reduce its memory footprint
5. Walk through the whole algorithm, which consists of pre-processing, the CNN, and post-processing
6. Generate CUDA C++ code (MEX) for the whole algorithm
7. Deploy the algorithm to NVIDIA hardware (see the deployment sketch after this list)
8. Run the executable on the target
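For steps 7 and 8, a minimal deployment sketch might look like the following, assuming the MATLAB Coder Support Package for NVIDIA Jetson and NVIDIA DRIVE Platforms is installed. The board address, credentials, input size, and the entry-point function `defectDetection` are placeholders, not values from this submission.

```matlab
% Minimal sketch of deploying to a Jetson AGX Xavier
% (board address, credentials, input size, and the entry-point
%  function "defectDetection" are placeholders).

% Connect to the target board over the network
hwobj = jetson('jetson-board-name', 'ubuntu', 'ubuntu');

% Configure GPU Coder to build an executable on the Jetson
cfg = coder.gpuConfig('exe');
cfg.Hardware = coder.hardware('NVIDIA Jetson');
cfg.Hardware.BuildDir = '~/remoteBuildDir';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.GenerateExampleMain = 'GenerateCodeAndCompile';

% Generate code on the host and build it on the target
codegen -config cfg defectDetection -args {ones(480,640,3,'uint8')}

% Run the generated executable on the board
runApplication(hwobj, 'defectDetection');
```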

[Japanese] This example walks through building a network that detects defects on an object (a hex nut), quantizing the network, and deploying it to the Jetson AGX Xavier via code generation. Several toolboxes and third-party tools are required, so please read the Prerequisites before running the example.

Cite As

Kei Otsuka (2024). Quantized Deep Neural Network on Jetson AGX Xavier (https://github.com/matlab-deep-learning/Quantized-Deep-Neural-Network-on-Jetson-AGX-Xavier/releases/tag/v1.0.0), GitHub. Retrieved .

MATLAB Release Compatibility
Created with R2020a
Compatible with any release
Platform Compatibility
Windows macOS Linux

Version History
1.0.0

To view or report issues in this GitHub add-on, visit the GitHub Repository.