Workflow for Deep Learning C/C++ Code Generation for Simulink Models
With Simulink® Coder™, you can generate code from deep learning neural networks that you design and implement using the Deep Learning Toolbox™. To use Simulink Coder to generate code for deep learning networks, you must also install the MATLAB® Coder Interface for Deep Learning. Deep learning uses neural networks to learn useful representations of features directly from data. You can obtain a pretrained neural network or train one yourself using the Deep Learning Toolbox. For more information, see Retrain Neural Network to Classify New Images (Deep Learning Toolbox) and Pretrained Deep Neural Networks (Deep Learning Toolbox).
Implement the trained neural network in Simulink by using blocks from the Deep Neural Networks library or by using a MATLAB Function block. When implementing the trained neural network with a MATLAB Function block, use the coder.loadDeepLearningNetwork function to load the trained deep learning network, and use the object functions of the network object to obtain the desired responses. The network must be supported for code generation. See Networks and Layers Supported for Code Generation.
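For illustration, here is a minimal sketch of a MATLAB Function block body that follows this pattern. The file name 'mynet.mat' and the function name predictNet are hypothetical placeholders; the MAT-file is assumed to contain a trained, code-generation-compatible network.

```matlab
function out = predictNet(in)
% Minimal MATLAB Function block body (a sketch; 'mynet.mat' is a
% hypothetical MAT-file that stores a trained deep learning network).
persistent mynet;
if isempty(mynet)
    % Load the network once; later calls reuse the persistent copy.
    mynet = coder.loadDeepLearningNetwork('mynet.mat');
end
% Obtain the network response for the current input.
out = predict(mynet, in);
end
```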
You can generate C++ code that targets an embedded platform that uses an Intel® processor or an ARM® processor. For high performance, the generated code calls the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN) or the ARM Compute Library. The hardware and software requirements depend on the target platform. To use these libraries, in the Model Configuration Parameters dialog box, set these parameters.
| Pane | Parameter | Setting |
| --- | --- | --- |
| Simulation Target | Language | C++ |
| Simulation Target | Target library | MKL-DNN |
| Code Generation | Language | C++ |
| Interface | Target library | MKL-DNN or ARM Compute |
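As a sketch, you can also apply these settings programmatically with set_param. The model name 'myModel' is a placeholder, and the parameter names (SimTargetLang, SimDLTargetLibrary, TargetLang, DLTargetLibrary) are assumptions drawn from MathWorks examples; verify them against your release.

```matlab
% A sketch of configuring the model programmatically for MKL-DNN.
% 'myModel' is a placeholder; parameter names are assumptions and may
% differ by release.
set_param('myModel', 'SimTargetLang', 'C++');          % Simulation Target > Language
set_param('myModel', 'SimDLTargetLibrary', 'MKL-DNN'); % Simulation Target > Target library
set_param('myModel', 'TargetLang', 'C++');             % Code Generation > Language
set_param('myModel', 'DLTargetLibrary', 'MKL-DNN');    % Interface > Target library
```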
For an example that uses the MKL-DNN library, see Code Generation for a Deep Learning Simulink Model That Performs Lane and Vehicle Detection.
You can also generate generic C or C++ code that does not depend on third-party libraries. To generate generic C or C++ code, set these parameters.
| Pane | Parameter | Setting |
| --- | --- | --- |
| Simulation Target | Language | C or C++ |
| Code Generation | Language | C or C++ |
| Interface | Target library | None |
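The corresponding programmatic sketch for a generic C build follows; as before, 'myModel' and the parameter names are assumptions, not verified identifiers.

```matlab
% A sketch for generic C code generation with no third-party library
% dependency. 'myModel' and the parameter names are assumptions.
set_param('myModel', 'SimTargetLang', 'C');
set_param('myModel', 'TargetLang', 'C');
set_param('myModel', 'SimDLTargetLibrary', 'None');
set_param('myModel', 'DLTargetLibrary', 'None');
slbuild('myModel');  % generate code for the model
```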
For an example, see Generate Generic C/C++ Code for Sequence-to-Sequence Deep Learning Simulink Models.
Deep learning models typically work on large sets of labeled data. Performing inference on these models is computationally intensive, consuming a significant amount of memory. You can use pruning in combination with network quantization to reduce the inference time and memory footprint of the deep learning network, making it easier to deploy to low-power microcontrollers and FPGAs. For more information, see:
Quantization of Deep Neural Networks (Deep Learning Toolbox)
Prune Image Classification Network Using Taylor Scores (Deep Learning Toolbox)
To perform quantization, you must install the Deep Learning Toolbox Model Compression Library support package.
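As an illustration, here is a minimal post-training quantization sketch using dlquantizer from the Deep Learning Toolbox; trainedNet and calData are hypothetical placeholders for a trained network and a calibration datastore, and the exact workflow may vary by release.

```matlab
% A minimal quantization sketch. 'trainedNet' and 'calData' are
% hypothetical placeholders for a trained network and a calibration
% datastore.
quantObj = dlquantizer(trainedNet, 'ExecutionEnvironment', 'CPU');
calibrate(quantObj, calData);   % collect dynamic ranges of weights and activations
qNet = quantize(quantObj);      % return a quantized version of the network
```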