Networks and Layers Supported for Code Generation

MATLAB® Coder™ supports code generation for series, directed acyclic graph (DAG), and recurrent convolutional neural networks (CNNs or ConvNets). You can generate code for any trained neural network whose layers are supported for code generation. See Supported Layers.
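
A typical workflow wraps the trained network in an entry-point function that loads it into a persistent variable and calls predict, and then generates code for that function. The sketch below is a minimal example, not a definitive implementation; the MobileNet-v2 network, the function name, and the variable names are assumptions for illustration.

function out = mynet_predict(in) %#codegen
% Load the pretrained network once and reuse it across calls.
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('mobilenetv2');
end
out = predict(net, in);
end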

Supported Pretrained Networks

The following pretrained networks, available in Deep Learning Toolbox™, are supported for code generation. Support can have limitations; for more information, see the Extended Capabilities section on the reference page for each model. For each network, support for code generation with the Intel® MKL-DNN library and the ARM® Compute Library is indicated.

AlexNet: AlexNet convolutional neural network. For the pretrained AlexNet model, see alexnet (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

DarkNet: DarkNet-19 and DarkNet-53 convolutional neural networks. For the pretrained DarkNet models, see darknet19 (Deep Learning Toolbox) and darknet53 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

DenseNet-201: DenseNet-201 convolutional neural network. For the pretrained DenseNet-201 model, see densenet201 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

EfficientNet-b0: EfficientNet-b0 convolutional neural network. For the pretrained EfficientNet-b0 model, see efficientnetb0 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

GoogLeNet: GoogLeNet convolutional neural network. For the pretrained GoogLeNet model, see googlenet (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

Inception-ResNet-v2: Inception-ResNet-v2 convolutional neural network. For the pretrained Inception-ResNet-v2 model, see inceptionresnetv2 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

Inception-v3: Inception-v3 convolutional neural network. For the pretrained Inception-v3 model, see inceptionv3 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

MobileNet-v2: MobileNet-v2 convolutional neural network. For the pretrained MobileNet-v2 model, see mobilenetv2 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

NASNet-Large: NASNet-Large convolutional neural network. For the pretrained NASNet-Large model, see nasnetlarge (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

NASNet-Mobile: NASNet-Mobile convolutional neural network. For the pretrained NASNet-Mobile model, see nasnetmobile (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

ResNet: ResNet-18, ResNet-50, and ResNet-101 convolutional neural networks. For the pretrained ResNet models, see resnet18 (Deep Learning Toolbox), resnet50 (Deep Learning Toolbox), and resnet101 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

SegNet: Multi-class pixelwise segmentation network. For more information, see segnetLayers (Computer Vision Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: No.

SqueezeNet: Small, deep neural network. For the pretrained SqueezeNet model, see squeezenet (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

VGG-16: VGG-16 convolutional neural network. For the pretrained VGG-16 model, see vgg16 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

VGG-19: VGG-19 convolutional neural network. For the pretrained VGG-19 model, see vgg19 (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.

Xception: Xception convolutional neural network. For the pretrained Xception model, see xception (Deep Learning Toolbox). Intel MKL-DNN: Yes. ARM Compute Library: Yes.
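
As a sketch of how the two library targets listed above are selected during code generation, the commands below reuse the hypothetical mynet_predict entry-point function from the introduction; the input size is an assumption for MobileNet-v2.

% Generate a static library that calls Intel MKL-DNN.
cfg = coder.config('lib');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');
codegen -config cfg mynet_predict -args {ones(224,224,3,'single')} -report

% For an ARM target, select the ARM Compute Library instead
% (additional hardware-specific configuration properties may also be required).
% cfg.DeepLearningConfig = coder.DeepLearningConfig('arm-compute');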

Supported Layers

The following layers are supported for code generation by MATLAB Coder for the target deep learning libraries indicated with each layer.

After you install the MATLAB Coder Interface for Deep Learning support package, you can use analyzeNetworkForCodegen to check whether a network is compatible with code generation for a specific deep learning library. For example:

result = analyzeNetworkForCodegen(mobilenetv2, TargetLibrary = 'mkldnn')

Note

Starting in R2022b, check the code generation compatibility of a deep learning network by using the analyzeNetworkForCodegen function. coder.getDeepLearningLayers is not recommended.
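
For instance, to check the same network against the ARM Compute Library target, change the TargetLibrary value. Treat this as a sketch; the 'arm-compute' name follows the target library identifiers used by the MATLAB Coder deep learning configuration.

result = analyzeNetworkForCodegen(mobilenetv2, TargetLibrary = 'arm-compute')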

additionLayer (Deep Learning Toolbox): Addition layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

anchorBoxLayer (Computer Vision Toolbox): Anchor box layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

attentionLayer (Deep Learning Toolbox): Dot-product attention layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.
  • Code generation is not supported when HasScoresOutput is set to true.

averagePooling1dLayer (Deep Learning Toolbox): 1-D average pooling layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

averagePooling2dLayer (Deep Learning Toolbox): Average pooling layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • You can generate C/C++ code using the 'mean' setting for the PaddingValue property.
  • For Simulink® models that implement deep learning functionality using a MATLAB Function block, simulation errors out if the network contains an average pooling layer with a nonzero padding value. In such cases, use the blocks from the Deep Neural Networks library instead of a MATLAB Function block to implement the deep learning functionality.

batchNormalizationLayer (Deep Learning Toolbox): Batch normalization layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

bilstmLayer (Deep Learning Toolbox): Bidirectional LSTM layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

classificationLayer (Deep Learning Toolbox): Classification output layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

clippedReluLayer (Deep Learning Toolbox): Clipped Rectified Linear Unit (ReLU) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

concatenationLayer (Deep Learning Toolbox): Concatenation layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

convolution1dLayer (Deep Learning Toolbox): 1-D convolutional layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

convolution2dLayer (Deep Learning Toolbox): 2-D convolution layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • For code generation, the PaddingValue parameter must be equal to 0, which is the default value.

crop2dLayer (Deep Learning Toolbox): Layer that applies 2-D cropping to the input. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

CrossChannelNormalizationLayer (Deep Learning Toolbox): Channel-wise local response normalization layer. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

Custom layers: Layers, with or without learnable parameters, that you define for your problem. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • The outputs of the custom layer must be fixed-size arrays.
  • Custom layers in sequence networks are supported for generic C/C++ code generation only.
  • For code generation, custom layers must contain the %#codegen pragma.
  • You can pass dlarray to custom layers if:
    • The custom layer is in a dlnetwork.
    • The custom layer is in a DAG or series network and either inherits from nnet.layer.Formattable or has no backward propagation.
  • For unsupported dlarray methods, you must extract the underlying data from the dlarray, perform the computations, and reconstruct the data back into a dlarray for code generation. For example:

function Z = predict(layer, X)
if coder.target('MATLAB')
    % Running in MATLAB: dlarray methods are available.
    Z = doPredict(X);
else
    % Generated code: operate on the underlying data.
    if isdlarray(X)
        X1 = extractdata(X);
        Z1 = doPredict(X1);
        Z = dlarray(Z1);
    else
        Z = doPredict(X);
    end
end
end

Custom output layers: All output layers, including custom classification or regression output layers created by using nnet.layer.ClassificationLayer or nnet.layer.RegressionLayer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • For an example showing how to define a custom classification output layer and specify a loss function, see Define Custom Classification Output Layer (Deep Learning Toolbox).
  • For an example showing how to define a custom regression output layer and specify a loss function, see Define Custom Regression Output Layer (Deep Learning Toolbox).

depthConcatenationLayer (Deep Learning Toolbox): Depth concatenation layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

depthToSpace2dLayer (Image Processing Toolbox): 2-D depth to space layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

dicePixelClassificationLayer (Computer Vision Toolbox): A Dice pixel classification layer provides a categorical label for each image pixel or voxel using generalized Dice loss. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

dropoutLayer (Deep Learning Toolbox): Dropout layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

eluLayer (Deep Learning Toolbox): Exponential linear unit (ELU) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

embeddingConcatenationLayer (Deep Learning Toolbox): Embedding concatenation layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

featureInputLayer (Deep Learning Toolbox): Feature input layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

flattenLayer (Deep Learning Toolbox): Flatten layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

focalLossLayer (Computer Vision Toolbox): A focal loss layer predicts object classes using focal loss. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

fullyConnectedLayer (Deep Learning Toolbox): Fully connected layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

geluLayer (Deep Learning Toolbox): Gaussian error linear unit (GELU) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

globalAveragePooling1dLayer (Deep Learning Toolbox): 1-D global average pooling layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

globalAveragePooling2dLayer (Deep Learning Toolbox): Global average pooling layer for spatial data. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

globalMaxPooling1dLayer (Deep Learning Toolbox): 1-D global max pooling layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

globalMaxPooling2dLayer (Deep Learning Toolbox): 2-D global max pooling layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

groupedConvolution2dLayer (Deep Learning Toolbox): 2-D grouped convolutional layer. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • For code generation, the PaddingValue parameter must be equal to 0, which is the default value.
  • If you specify an integer for NumGroups, then the value must be less than or equal to 2.

groupNormalizationLayer (Deep Learning Toolbox): Group normalization layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

gruLayer (Deep Learning Toolbox): Gated recurrent unit (GRU) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

gruProjectedLayer (Deep Learning Toolbox): GRU projected layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

imageInputLayer (Deep Learning Toolbox): Image input layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Code generation does not support 'Normalization' specified using a function handle.

indexing1dLayer (Deep Learning Toolbox): 1-D indexing layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

layerNormalizationLayer (Deep Learning Toolbox): Layer normalization layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

leakyReluLayer (Deep Learning Toolbox): Leaky Rectified Linear Unit (ReLU) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

lstmLayer (Deep Learning Toolbox): Long short-term memory (LSTM) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

lstmProjectedLayer (Deep Learning Toolbox): LSTM projected layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

maxPooling1dLayer (Deep Learning Toolbox): 1-D max pooling layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

maxPooling2dLayer (Deep Learning Toolbox): Max pooling layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • If equal maximum values exist along the off-diagonal in a kernel window, implementation differences for the maxPooling2dLayer might cause minor numerical mismatches between MATLAB and the generated code. This issue also causes mismatches in the indices of the maximum value in each pooled region. For more information, see maxPooling2dLayer (Deep Learning Toolbox).

maxUnpooling2dLayer (Deep Learning Toolbox): Max unpooling layer. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: No.
  • If equal maximum values exist along the off-diagonal in a kernel window, implementation differences for the maxPooling2dLayer might cause minor numerical mismatches between MATLAB and the generated code. This issue also causes mismatches in the indices of the maximum value in each pooled region. For more information, see maxUnpooling2dLayer (Deep Learning Toolbox).

multiplicationLayer (Deep Learning Toolbox): Multiplication layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

patchEmbeddingLayer (Computer Vision Toolbox): Patch embedding layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.
  • Code generation supports only 1-D and 2-D spatial data. Data formats with three or more spatial dimensions, such as "SSS" or "SSSS", are not supported.

pixelClassificationLayer (Computer Vision Toolbox): Pixel classification layer for semantic segmentation. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

positionEmbeddingLayer (Deep Learning Toolbox): Maps sequential or spatial indices to vectors. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.

rcnnBoxRegressionLayer (Computer Vision Toolbox): Box regression layer for Fast and Faster R-CNN. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

rpnClassificationLayer (Computer Vision Toolbox): Classification layer for region proposal networks (RPNs). Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

regressionLayer (Deep Learning Toolbox): Regression output layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

reluLayer (Deep Learning Toolbox): Rectified Linear Unit (ReLU) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

resize2dLayer (Image Processing Toolbox): 2-D resize layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

scalingLayer (Reinforcement Learning Toolbox): Scaling layer for actor or critic network. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

selfAttentionLayer (Deep Learning Toolbox): Self-attention layer. Generic C/C++: Yes. Intel MKL-DNN: No. ARM Compute Library: No.
  • Code generation is not supported when HasScoresOutput is set to true.

sigmoidLayer (Deep Learning Toolbox): Sigmoid layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

sequenceFoldingLayer (Deep Learning Toolbox): Sequence folding layer. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

sequenceInputLayer (Deep Learning Toolbox): Sequence input layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • For vector sequence inputs, the number of features must be a constant during code generation.
  • For code generation, the input data must contain either zero or two spatial dimensions.
  • Code generation does not support 'Normalization' specified using a function handle.

sequenceUnfoldingLayer (Deep Learning Toolbox): Sequence unfolding layer. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

softmaxLayer (Deep Learning Toolbox): Softmax layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

softplusLayer (Reinforcement Learning Toolbox): Softplus layer for actor or critic network. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

spaceToDepthLayer (Image Processing Toolbox): Space to depth layer. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

ssdMergeLayer (Computer Vision Toolbox): SSD merge layer for object detection. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

swishLayer (Deep Learning Toolbox): Swish layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.ClipLayer: Clips the input between the upper and lower bounds. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.FlattenCStyleLayer: Flattens activations into 1-D assuming C-style (row-major) order. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.GlobalAveragePooling2dLayer: Global average pooling layer for spatial data. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.PreluLayer: Parametric rectified linear unit. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.SigmoidLayer: Sigmoid activation layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.TanhLayer: Hyperbolic tangent activation layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.TimeDistributedFlattenCStyleLayer: Flattens a sequence of input images into a sequence of vectors, assuming C-style (row-major) storage ordering of the input layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.keras.layer.ZeroPadding2dLayer: Zero padding layer for 2-D input. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.ClipLayer: Clips the input between the upper and lower bounds. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.ElementwiseAffineLayer: Layer that performs element-wise scaling of the input followed by an addition. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.FlattenInto2dLayer: Flattens a MATLAB 2-D image batch in the way ONNX does, producing a 2-D output array with CB format. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.FlattenLayer: Flatten layer for ONNX™ network. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.GlobalAveragePooling2dLayer: Global average pooling layer for spatial data. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.IdentityLayer: Layer that implements the ONNX identity operator. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.PreluLayer: Parametric rectified linear unit. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.SigmoidLayer: Sigmoid activation layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.TanhLayer: Hyperbolic tangent activation layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

nnet.onnx.layer.VerifyBatchSizeLayer: Verifies a fixed batch size. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

tanhLayer (Deep Learning Toolbox): Hyperbolic tangent (tanh) layer. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

transposedConv2dLayer (Deep Learning Toolbox): Transposed 2-D convolution layer. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Code generation does not support asymmetric cropping of the input. For example, specifying a vector [t b l r] for the 'Cropping' parameter to crop the top, bottom, left, and right of the input is not supported.

wordEmbeddingLayer (Text Analytics Toolbox): A word embedding layer maps word indices to vectors. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • The property OOVMode is always set to "map-to-last" when you generate code that depends on third-party deep learning libraries.
  • The property OOVMode reverts to "map-to-last" if the run-time check is disabled in the configuration settings. To enable the run-time check, set RuntimeChecks to true when generating standalone C/C++ code, or set IntegrityChecks to true when generating MEX code. For more information, see coder.config and coder.MexCodeConfig.

yolov2OutputLayer (Computer Vision Toolbox): Output layer for YOLO v2 object detection network. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

yolov2TransformLayer (Computer Vision Toolbox): Transform layer for YOLO v2 object detection network. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.

Supported Classes

The following classes are supported for code generation by MATLAB Coder for the target deep learning libraries indicated with each class.

DAGNetwork (Deep Learning Toolbox): Directed acyclic graph (DAG) network for deep learning. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Only the activations, predict, and classify methods are supported.

dlnetwork (Deep Learning Toolbox): Deep learning network for custom training loops. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Code generation supports only the InputNames and OutputNames properties.
  • The Initialized property of the dlnetwork object must be 1 (true).
  • Code generation supports tuning the Value variable of the State property. Code generation does not support modifying the Layer and Parameter variables of the State property.
  • Code generation supports these functions for the State property:
  • For Simulink simulation, code generation does not support extracting and updating the State of a dlnetwork in a MATLAB Function block. Instead, use a Stateful Predict (Deep Learning Toolbox) or a Stateful Classify (Deep Learning Toolbox) block.
  • Code generation supports multi-input multi-output dlnetwork objects with heterogeneous input layers.
    • For recurrent neural networks, multiple inputs are not supported.
    • For Intel MKL-DNN, all input layers must be sequence input layers.
    • For ARM Compute, the dlnetwork can have sequence and nonsequence input layers.
  • Code generation supports dlarray inputs with these limitations on data formats:
    • For generic C/C++ code that does not depend on third-party libraries, the dlnetwork can have input layers with any number of spatial dimensions.
    • For ARM Compute and Intel MKL-DNN, code generation supports dlarray objects containing zero or two spatial dimensions. For example, code generation supports vector sequences that have "CT" or "CBT" data formats and image sequences that have "SSCT" or "SSCBT" data formats.
    • For code generation, only the "T" (time) dimension can be variable-sized; all other dimensions must have fixed sizes.
  • Code generation supports only the predict object function. The dlarray input to the predict method must be of a single data type.
  • To create a dlnetwork object for code generation, see Load Pretrained Networks for Code Generation.
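
As an illustration of these constraints, a minimal entry-point function for a dlnetwork might look like the sketch below. The MAT-file name, the 'SSCB' data format, and the surrounding names are assumptions for this example; coder.loadDeepLearningNetwork loads the network, and predict is the only supported object function.

function out = dlnet_predict(in) %#codegen
% Load the dlnetwork from a MAT-file once and reuse it across calls.
persistent dlnet;
if isempty(dlnet)
    dlnet = coder.loadDeepLearningNetwork('myDlnetwork.mat');
end
% Wrap the numeric input in a formatted dlarray (two spatial dimensions).
dlIn = dlarray(in, 'SSCB');
dlOut = predict(dlnet, dlIn);
out = extractdata(dlOut);
end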

SeriesNetwork (Deep Learning Toolbox): Series network for deep learning. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Only the activations, classify, predict, predictAndUpdateState, classifyAndUpdateState, and resetState object functions are supported.
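
For example, a stateful sequence prediction sketch that uses predictAndUpdateState might look like the following; the MAT-file name and the reset flag are assumptions for this illustration.

function out = lstm_predict_and_update(in, resetNetworkState) %#codegen
% Load the series network once; its state is kept between calls.
persistent net;
if isempty(net)
    net = coder.loadDeepLearningNetwork('lstmNet.mat');
end
if resetNetworkState
    net = resetState(net);
end
[net, out] = predictAndUpdateState(net, in);
end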

yolov2ObjectDetector (Computer Vision Toolbox): Detect objects using YOLO v2 object detector. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Only the detect (Computer Vision Toolbox) method of the yolov2ObjectDetector is supported for code generation.
  • The roi argument to the detect method must be a code generation constant (coder.const()) and a 1x4 vector.
  • Only the Threshold, SelectStrongest, MinSize, and MaxSize name-value pairs for detect are supported.
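
A sketch of an entry-point function that follows these restrictions is shown below; the detector MAT-file name, the fixed region of interest, and the threshold are assumptions for this example.

function [bboxes, scores, labels] = yolov2_detect(img) %#codegen
% Load the pretrained yolov2ObjectDetector from a MAT-file once.
persistent detector;
if isempty(detector)
    detector = coder.loadDeepLearningNetwork('yolov2Detector.mat');
end
% The roi input must be a compile-time constant 1x4 vector.
roi = coder.const([1 1 224 224]);
[bboxes, scores, labels] = detect(detector, img, roi, 'Threshold', 0.5);
end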

yolov3ObjectDetector (Computer Vision Toolbox): Detect objects using YOLO v3 object detector. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Only the detect (Computer Vision Toolbox) method of the yolov3ObjectDetector is supported for code generation.
  • The roi argument to the detect method must be a code generation constant (coder.const()) and a 1x4 vector.
  • Only the Threshold, SelectStrongest, MinSize, and MaxSize name-value pairs for detect are supported.

yolov4ObjectDetector (Computer Vision Toolbox): Detect objects using YOLO v4 object detector. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • The roi argument to the detect method must be a code generation constant (coder.const()) and a 1x4 vector.
  • Only the Threshold, SelectStrongest, MinSize, MaxSize, and MiniBatchSize name-value pairs for detect are supported.

yoloxObjectDetector (Computer Vision Toolbox): Detect objects using YOLOX object detector. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • To prepare a yoloxObjectDetector object for GPU code generation, use vision.loadYOLOXObjectDetector (Computer Vision Toolbox).
  • The roi argument to the detect method must be a code generation constant (coder.const()) and a 1x4 vector.
  • The AutoResize argument to the detect method must be a code generation constant (coder.const()).
  • Only the Threshold, SelectStrongest, MinSize, MaxSize, MiniBatchSize, and AutoResize name-value pairs for detect are supported.

ssdObjectDetector (Computer Vision Toolbox): Detect objects using the SSD-based detector. Generic C/C++: Yes. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Only the detect (Computer Vision Toolbox) method of the ssdObjectDetector is supported for code generation.
  • The roi argument to the detect method must be a code generation constant (coder.const()) and a 1x4 vector.
  • Only the Threshold, SelectStrongest, MinSize, MaxSize, and MiniBatchSize name-value pairs are supported. All name-value pairs must be compile-time constants.
  • The channel and batch size of the input image must be fixed.
  • The labels output is returned as a categorical array.
  • In the generated code, the input is rescaled to the size of the input layer of the network, but the bounding boxes that the detect method returns are relative to the original input size.

pointPillarsObjectDetector (Lidar Toolbox): PointPillars network to detect objects in lidar point clouds. Generic C/C++: No. Intel MKL-DNN: Yes. ARM Compute Library: Yes.
  • Only the detect method of the pointPillarsObjectDetector is supported for code generation.
  • Only the Threshold, SelectStrongest, and MiniBatchSize name-value pairs of the detect method are supported.

int8 Code Generation

You can use Deep Learning Toolbox in tandem with the Deep Learning Toolbox Model Quantization Library support package to reduce the memory footprint of a deep neural network by quantizing the weights, biases, and activations of convolution layers to 8-bit scaled integer data types. Then, you can use MATLAB Coder to generate optimized code for the network. See Generate int8 Code for Deep Learning Networks.
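
The quantization step itself uses the dlquantizer workflow from the support package. The sketch below is a minimal calibration example under assumed names (a trained network net and a calibration datastore calData are hypothetical placeholders); see Generate int8 Code for Deep Learning Networks for the full workflow, including how the calibration results are used when generating code.

% Create a quantizer for CPU code generation targets and calibrate it.
quantObj = dlquantizer(net, 'ExecutionEnvironment', 'CPU');
calResults = calibrate(quantObj, calData);   % collect dynamic ranges of weights, biases, and activations
save('quantObj.mat', 'quantObj');            % the saved object is referenced when generating int8 code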

Related Topics