Deep Learning for ARM using Simulink/Embedded Coder

I noticed that the MATLAB example shows code generation that takes advantage of the ARM Compute Library for deep learning via Simulink/Embedded Coder.
My questions are:
  • Which versions of the ARM Compute Library are supported? Exactly 19.05 and 20.02.1?
  • Does support depend on the library version that the vendor has already pre-built for the embedded target?
  • Is it able to run models with ARM-NN, which uses the Compute Library to reach the on-chip execution units?
  • Does codegen support additional (proprietary) libraries?
  • Can codegen use a Python or C++ DNN interpreter that is already available on-chip?
Thank you.

1 Answer

Nathan Malimban, 16 December 2021


Hi Peter,
1. For R2021b, the supported ARM Compute Library versions are 19.02, 19.05, 20.02.1, and 20.11.
2. Just make sure that the version on the hardware is one of the ones compatible with your MATLAB release. For setting the library up on the hardware, see https://www.mathworks.com/matlabcentral/answers/455590-matlab-coder-how-do-i-build-the-arm-compute-library-for-deep-learning-c-code-generation-and-deplo
3. Today, we call directly into the ARM Compute Library without the ARM-NN indirection, as it does not provide any additional benefit on ARM Cortex-A series processors. We’d be interested in learning how ARM-NN improves your deployment workflow, though.
4. For boards with ARM Cortex-M, codegen supports CMSIS-NN starting in R2022a. For Intel CPUs, codegen supports MKL-DNN. For NVIDIA GPUs, codegen supports the cuDNN and TensorRT libraries.
5. We are adding support for deployment of TFLite models in R2022a.
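To make the library choices above concrete, here is a minimal MATLAB sketch of selecting the ARM Compute Library as the code generation target. The property names follow the MathWorks `coder.DeepLearningConfig` documentation; `myPredictFunction` and the input size are hypothetical placeholders.

```matlab
% Sketch: selecting a deep learning target library for code generation.
% 'arm-compute' is the documented TargetLibrary value for the ARM Compute
% Library; other values include 'cmsis-nn', 'mkldnn', 'cudnn', 'tensorrt'.
cfg  = coder.config('lib');
dlcfg = coder.DeepLearningConfig('arm-compute');
dlcfg.ArmComputeVersion = '20.02.1';  % must match the version built on the target
dlcfg.ArmArchitecture   = 'armv8';    % or 'armv7', per the target CPU
cfg.DeepLearningConfig  = dlcfg;

% myPredictFunction is a hypothetical entry point that calls predict on a
% trained network; the -args size is an example input.
% codegen -config cfg myPredictFunction -args {ones(224,224,3,'single')}
```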

6 Comments

Peter Balazovic, 17 December 2021 (edited 17 December 2021)
3. Arm NN is an inference engine; it does not perform computations on its own but delegates them to compute engines (e.g. the ARM Compute Library). It provides a bridge between existing NN frameworks and Cortex-A CPUs, Mali GPUs, or Ethos NPUs. Arm NN does not support Cortex-M. Arm NN analyzes a given model and replaces its operations with implementations designed for the target hardware, which results in a performance boost.
On the other hand, the ARM Compute Library consists of low-level (NN and other) functions optimized for ARM CPU and GPU architectures. Since this library is designed as a compute engine for Arm NN, one is better off using Arm NN.
Are you able to generate code that leverages ARM-NN, with the possibility of using different computational backends?
Peter Balazovic, 17 December 2021
5. An interpreter is an interface for running TensorFlow Lite models; the TFLite interpreter runs the model on-device. Are you going to leverage tflite::Interpreter?
Peter Balazovic, 17 December 2021 (edited 17 December 2021)
4. Are you going to support TensorFlow Lite for Microcontrollers and its MicroInterpreter?
TensorFlow Lite for Microcontrollers (TFLM, also written TFL-Micro) can deploy TensorFlow Lite models. It is an alternative implementation of the TensorFlow Lite library optimized for microcontrollers, and it contains operation kernels optimized for Cortex-M using the CMSIS-NN library. After converting the model to the TensorFlow Lite format, the model is converted into a C array, included in the application source code, and interpreted using the TFLM MicroInterpreter.
Snippet:
// Build the TFLM interpreter (model, op resolver, tensor arena, and error
// reporter are assumed to be set up beforehand).
tflite::MicroInterpreter interpreter(model, microOpResolver, tensorArena, kTensorArenaSize, microErrorReporter);
interpreter.AllocateTensors();  // allocate tensor memory from the arena
// Run the inference
interpreter.Invoke();
Nathan Malimban, 17 December 2021 (edited 17 December 2021)
4. TFLite Micro will not be supported in R2022a, but it's good for us to hear requests so we can plan accordingly for the future. Is your question out of curiosity, or would codegen support for TFLite Micro help your workflow?
Nathan Malimban, 17 December 2021
5. Yes. In R2022a, we will allow you to load a TFLite network in MATLAB and generate code. The generated code leverages the TFLite interpreter.
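A minimal sketch of that workflow, assuming the R2022a `loadTFLiteModel`/`predict` interface (which requires the "Deep Learning Toolbox Interpreter for TensorFlow Lite Models" support package); `tfliteEntryPoint`, `model.tflite`, and the input size are placeholder names, not confirmed by this thread.

```matlab
% Sketch: entry-point function wrapping a TFLite model for code generation.
function out = tfliteEntryPoint(in)
    % Load the model once and reuse it across calls.
    persistent net;
    if isempty(net)
        net = loadTFLiteModel('model.tflite');  % placeholder path
    end
    out = predict(net, in);
end

% Code generation then targets the entry point; the generated C/C++ code
% calls into the TFLite interpreter at run time:
% codegen tfliteEntryPoint -args {ones(224,224,3,'single')} -config coder.config('lib')
```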
Peter Balazovic, 17 December 2021 (edited 17 December 2021)
4.
Certainly, it could help with my workflow. In this sense, I would expect a coder support package for the i.MX RT (Cortex-M processor) parts. Certain i.MX RT devices have a Cadence DSP, and TFL-Micro can leverage a DSP-optimized implementation of various NN layers and low-level NN kernels. This DSP library focuses on the speech and audio neural network domain.


Category: Deep Learning Code Generation Fundamentals
Release: R2021a
Asked: 16 December 2021
Edited: 17 December 2021
