sequenceUnfoldingLayer

(Not recommended) Sequence unfolding layer

SequenceUnfoldingLayer objects are not recommended. Most neural networks specified as a dlnetwork object do not require sequence folding and unfolding layers. In most cases, deep learning layers have the same behavior when there is no folding or unfolding layer. Otherwise, instead of using a SequenceUnfoldingLayer to manipulate the dimensions of data for downstream layers, define a custom layer or a functionLayer object that operates on the data directly. For more information, see Version History.

Description

A sequence unfolding layer restores the sequence structure of the input data after sequence folding.

To use a sequence unfolding layer, you must connect the miniBatchSize output of the corresponding sequence folding layer to the miniBatchSize input of the sequence unfolding layer.
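For example, this connection is typically made in a layer graph that applies convolutional operations independently to each time step of a sequence. This sketch uses illustrative layer names and sizes; the key step is the connectLayers call that wires the folding layer's miniBatchSize output to the unfolding layer's miniBatchSize input.

```matlab
% Sketch: apply a 2-D convolution to each time step of an image
% sequence by folding, convolving, then unfolding. Layer names and
% sizes here are illustrative, not prescribed.
layers = [
    sequenceInputLayer([28 28 1],'Name','input')
    sequenceFoldingLayer('Name','fold')
    convolution2dLayer(5,20,'Name','conv')
    sequenceUnfoldingLayer('Name','unfold')
    flattenLayer('Name','flatten')
    lstmLayer(100,'OutputMode','last','Name','lstm')
    fullyConnectedLayer(10,'Name','fc')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','classification')];

lgraph = layerGraph(layers);

% Connect the miniBatchSize output of the folding layer to the
% miniBatchSize input of the unfolding layer.
lgraph = connectLayers(lgraph,'fold/miniBatchSize','unfold/miniBatchSize');
```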

Creation

Description

layer = sequenceUnfoldingLayer creates a sequence unfolding layer.

layer = sequenceUnfoldingLayer('Name',Name) creates a sequence unfolding layer and sets the optional Name property using a name-value pair. For example, sequenceUnfoldingLayer('Name','unfold1') creates a sequence unfolding layer with the name 'unfold1'. Enclose the property name in single quotes.

Properties

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainNetwork function automatically assigns names to layers with the name "".

The SequenceUnfoldingLayer object stores this property as a character vector.

Data Types: char | string

Number of inputs to the layer, returned as 2.

This layer has two inputs:

  • 'in' – Input feature map.

  • 'miniBatchSize' – Size of the mini-batch from the corresponding sequence folding layer. This input must be connected to the 'miniBatchSize' output of the corresponding sequence folding layer.

Data Types: double

Input names, returned as {'in','miniBatchSize'}.

This layer has two inputs:

  • 'in' – Input feature map.

  • 'miniBatchSize' – Size of the mini-batch from the corresponding sequence folding layer. This input must be connected to the 'miniBatchSize' output of the corresponding sequence folding layer.

Data Types: cell

Number of outputs from the layer, returned as 1. This layer has a single output only.

Data Types: double

Output names, returned as {'out'}. This layer has a single output only.

Data Types: cell

Examples

Create a sequence unfolding layer.

Create a sequence unfolding layer with the name 'unfold1'.

layer = sequenceUnfoldingLayer('Name','unfold1')
layer = 
  SequenceUnfoldingLayer with properties:

          Name: 'unfold1'
     NumInputs: 2
    InputNames: {'in'  'miniBatchSize'}

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Version History

Introduced in R2019a

R2024a: Not recommended

Starting in R2024a, DAGNetwork and SeriesNetwork objects are not recommended. Use dlnetwork objects instead. This recommendation means that SequenceUnfoldingLayer objects are also not recommended. Most neural networks specified as a dlnetwork object do not require sequence folding and unfolding layers. In most cases, deep learning layers have the same behavior when there is no folding or unfolding layer. Otherwise, instead of using a SequenceUnfoldingLayer to manipulate the dimensions of data for downstream layers, define a custom layer or a functionLayer object that operates on the data directly. For more information about custom layers, see Define Custom Deep Learning Layers.
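As a minimal sketch of the functionLayer alternative, assuming the downstream layers need the spatial and channel dimensions of "SSCBT" sequence data merged into a single channel dimension (similar to flattening after unfolding), a formattable function layer can perform the reshape directly. The function name flattenSpatial and the data format are assumptions for illustration.

```matlab
% Sketch: a formattable function layer that merges the two spatial
% dimensions and the channel dimension of "SSCBT" data into one
% channel dimension. flattenSpatial is a hypothetical helper.
layer = functionLayer(@flattenSpatial,Formattable=true,Name="flatten");

function Y = flattenSpatial(X)
% X is a formatted dlarray with format "SSCBT"; size returns the
% dimensions in that order.
sz = size(X);
Y = dlarray(reshape(stripdims(X),prod(sz(1:3)),sz(4),sz(5)),"CBT");
end
```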

There are no plans to remove support for DAGNetwork, SeriesNetwork, and SequenceUnfoldingLayer objects. However, dlnetwork objects have these advantages and are recommended instead:

  • dlnetwork objects are a unified data type that supports network building, prediction, built-in training, visualization, compression, verification, and custom training loops.

  • dlnetwork objects support a wider range of network architectures that you can create or import from external platforms.

  • The trainnet function supports dlnetwork objects, which enables you to easily specify loss functions. You can select from built-in loss functions or specify a custom loss function.

  • Training and prediction with dlnetwork objects are typically faster than LayerGraph and trainNetwork workflows.

To convert a trained DAGNetwork or SeriesNetwork object to a dlnetwork object, use the dag2dlnetwork function.
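For example, the conversion is a single call; the MAT-file and variable names below are hypothetical placeholders for your own trained network.

```matlab
% Sketch: convert a trained DAGNetwork to a dlnetwork.
% "trainedDAGNet.mat" and trainedNet are hypothetical names.
load("trainedDAGNet.mat","trainedNet")
net = dag2dlnetwork(trainedNet);
```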