Experiment Manager
Design and run experiments to train and compare deep learning networks
Since R2020a
Description
You can use the Experiment Manager app to create deep learning experiments to train networks under different training conditions and compare the results. For example, you can use Experiment Manager to:
Sweep through a range of hyperparameter values or use Bayesian optimization to find optimal training options. Bayesian optimization requires Statistics and Machine Learning Toolbox™.
Use the built-in training function trainnet or define your own custom training function.
Compare the results of using different data sets or test different deep network architectures.
To set up your experiment quickly, you can start with a preconfigured template. The experiment templates support workflows that include image classification and regression, sequence classification, audio classification, signal processing, semantic segmentation, and custom training loops.
The Experiment Browser panel displays the hierarchy of experiments and results in a project. The icon next to the experiment name indicates its type.
— Built-in training experiment that uses the trainnet training function
— Custom training experiment that uses a custom training function
— General-purpose experiment that uses a user-authored experiment function
This page contains information about built-in and custom training experiments for Deep Learning Toolbox™. For general information about using the app, see Experiment Manager. For information about using Experiment Manager with the Classification Learner and Regression Learner apps, see Experiment Manager (Statistics and Machine Learning Toolbox).
Required Products
Use Deep Learning Toolbox to run built-in or custom training experiments for deep learning and to view confusion matrices for these experiments.
Use Statistics and Machine Learning Toolbox to run custom training experiments for machine learning and experiments that use Bayesian optimization.
Use Parallel Computing Toolbox™ to run multiple trials at the same time or a single trial on multiple GPUs, on a cluster, or in the cloud. For more information, see Run Experiments in Parallel.
Use MATLAB® Parallel Server™ to offload experiments as batch jobs in a remote cluster. For more information, see Offload Experiments as Batch Jobs to a Cluster.
Open the Experiment Manager App
MATLAB Toolstrip: On the Apps tab, under MATLAB, click the Experiment Manager icon.
MATLAB command prompt: Enter experimentManager.
For general information about using the app, see Experiment Manager.
Examples
Quickly Set Up Experiment Using Preconfigured Template
Quickly set up an experiment using a preconfigured experiment template.
Open the Experiment Manager app. In the dialog box, you can create a new project or open an example from the documentation. Under New, select Blank Project.
In the next dialog box, you can open a blank experiment template or one of the preconfigured experiment templates to support your AI workflow. For example, under Image Classification Experiments, select the preconfigured template Image Classification by Sweeping Hyperparameters.
Specify the name and location for the new project. Experiment Manager opens a new experiment in the project.
The experiment is a built-in training experiment that uses the trainnet training function, indicated by the icon.
The experiment definition tab displays the description, hyperparameters, setup function, post-training custom metrics, and supporting files that define the experiment. You can modify these parameters to quickly set up your experiment, and then run the experiment.
For information about how to run the experiment and compare results after you configure the experiment parameters, see Experiment Manager.
Train Network Using trainnet and Display Custom Metrics
Set up an experiment that trains a network using the trainnet function and an exhaustive hyperparameter sweep. Built-in training experiments support workflows such as image, sequence, time-series, or feature classification and regression.
Open the Experiment Manager app. In the dialog box, you can create a new project or open an example from the documentation. Under New, select Blank Project.
In the next dialog box, you can open a blank experiment template or one of the preconfigured experiment templates to support your AI workflow. Under Blank Experiments, select the blank template Built-In Training (trainnet).
The experiment is a built-in training experiment that uses the trainnet training function, indicated by the icon.
The experiment definition tab displays the description, hyperparameters, setup function, post-training custom metrics, and supporting files that define the experiment. When starting with a blank experiment template, you must manually configure these parameters. If you prefer a template with some preconfigured parameters, select one of the preconfigured built-in training templates instead from the Experiment Manager dialog box.
Configure the experiment parameters.
Description — Enter a description of the experiment.
Hyperparameters — Specify the strategy as Exhaustive Sweep to use every combination of the hyperparameter values. Then, define the hyperparameters to use for your experiment.
For example, for Evaluate Deep Learning Experiments by Using Metric Functions, the strategy is Exhaustive Sweep and the hyperparameters are InitialLearnRate and Momentum.
Setup Function — Configure training data, network architecture, loss function, and training options using one of the Setup Function Signatures. The setup function input is a structure with fields from the Hyperparameters table. The output must match the input of the trainnet function. A minimal sketch of a setup function appears after this list.
For example, for Evaluate Deep Learning Experiments by Using Metric Functions, the setup function accesses the structure of hyperparameters and returns the inputs to the training function. The setup function is defined in a file named ClassificationExperiment_setup.mlx.
Post-Training Custom Metrics — Compute metrics after each trial to display in the results table. To create the custom metric function, click the Add button in the Post-Training Custom Metrics section. Then, select the metric in the table and click Edit to open and modify the function in the MATLAB Editor. To determine the best combination of hyperparameters for your experiment, inspect the values of these metrics in the results table.
For example, for Evaluate Deep Learning Experiments by Using Metric Functions, the post-training custom metrics are specified by the functions OnesAsSevens and SevensAsOnes. The functions are defined in files named OnesAsSevens.mlx and SevensAsOnes.mlx. The results table displays these metrics.
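The following is a minimal sketch of a setup function for the [images,layers,lossFcn,options] signature. The data set folder, network layers, and solver choice are hypothetical; only the dot-notation access to the swept hyperparameters mirrors the experiment described above.

```matlab
function [images,layers,lossFcn,options] = Experiment_setup(params)
% Training data: a hypothetical folder of labeled images.
images = imageDatastore("digitDataset", ...
    IncludeSubfolders=true,LabelSource="foldernames");

% A small example CNN; any layer array that suits the task works here.
layers = [
    imageInputLayer([28 28 1])
    convolution2dLayer(3,16,Padding="same")
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];

% Loss function accepted by trainnet.
lossFcn = "crossentropy";

% Read the swept hyperparameter values with dot notation.
options = trainingOptions("sgdm", ...
    InitialLearnRate=params.InitialLearnRate, ...
    Momentum=params.Momentum);
end
```

Experiment Manager calls this function once per trial, passing one combination of hyperparameter values in params, and forwards the four outputs to trainnet.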
For information about how to run the experiment and compare results after you configure the experiment parameters, see Experiment Manager.
Optimize Training Using trainnet and Bayesian Optimization
Set up an experiment that trains a network using the trainnet function and Bayesian optimization. Built-in training experiments support workflows such as image, sequence, time-series, or feature classification and regression.
Open the Experiment Manager app. In the dialog box, you can create a new project or open an example from the documentation. Under New, select Blank Project.
In the next dialog box, you can open a blank experiment template or one of the preconfigured experiment templates to support your AI workflow. Under Blank Experiments, select the blank template Built-In Training (trainnet).
The experiment is a built-in training experiment that uses the trainnet training function, indicated by the icon.
The experiment definition tab displays the description, hyperparameters, setup function, post-training custom metrics, and supporting files that define the experiment. When starting with a blank experiment template, you must manually configure these parameters. If you prefer a template with some preconfigured parameters, select one of the preconfigured built-in training templates instead from the Experiment Manager dialog box.
Configure the experiment parameters.
Description — Enter a description of the experiment.
Hyperparameters — Specify the strategy as Bayesian Optimization (Statistics and Machine Learning Toolbox). Specify the hyperparameters as two-element vectors that give the lower bound and upper bound or as an array of strings or a cell array of character vectors that list the possible values of the hyperparameter. The experiment optimizes the specified metric and automatically determines the best combination of hyperparameters for your experiment. Then, specify the maximum time, maximum number of trials, and any advanced options for Bayesian optimization.
For example, for Tune Experiment Hyperparameters by Using Bayesian Optimization, the strategy is Bayesian Optimization. The hyperparameter names are SectionDepth, InitialLearnRate, Momentum, and L2Regularization. The maximum number of trials is 30.
Setup Function — Configure training data, network architecture, loss function, and training options using one of the Setup Function Signatures. The setup function input is a structure with fields from the Hyperparameters table. The output must match the input of the trainnet function.
For example, for Tune Experiment Hyperparameters by Using Bayesian Optimization, the setup function accesses the structure of hyperparameters and returns the inputs to the training function. The setup function is defined in a file named BayesOptExperiment_setup.mlx.
Post-Training Custom Metrics — Choose the optimization direction and a standard training or validation metric (such as accuracy, RMSE, or loss) or a custom metric from the table. The output of a metric function must be a numeric, logical, or string scalar. A hedged sketch of such a metric function appears after this list.
For example, for Tune Experiment Hyperparameters by Using Bayesian Optimization, the post-training custom metric is specified by a function ErrorRate. The function is defined in a file named ErrorRate.mlx. The experiment minimizes this metric.
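For illustration, here is a hedged sketch of an error-rate metric function. The input field name (trainedNetwork) and the test datastore are assumptions for this example, not the contents of the ErrorRate.mlx file named above.

```matlab
function metric = ErrorRate(trialInfo)
% trialInfo is the structure that Experiment Manager passes to a metric
% function; the trainedNetwork field name is assumed here.
net = trialInfo.trainedNetwork;

% Hypothetical held-out test set.
testData = imageDatastore("digitTestSet", ...
    IncludeSubfolders=true,LabelSource="foldernames");
classNames = categories(testData.Labels);

% Classify the test images and return the misclassification rate.
scores = minibatchpredict(net,testData);
YPred = scores2label(scores,classNames);
metric = mean(YPred ~= testData.Labels); % numeric scalar to minimize
end
```

Because the output is a numeric scalar, the Bayesian optimization strategy can minimize it directly.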
For information about how to run the experiment and compare results after you configure the experiment parameters, see Experiment Manager.
Train Network Using Custom Training Loop and Display Visualization
Set up an experiment that trains using a custom training function and creates custom visualizations.
Custom training experiments support workflows that require a training function other than trainnet. These workflows include:
Training a network that is not defined by a layer graph
Training a network using a custom learning rate schedule
Updating the learnable parameters of a network by using a custom function
Training a generative adversarial network (GAN)
Training a twin neural network
Open the Experiment Manager app. In the dialog box, you can create a new project or open an example from the documentation. Under New, select Blank Project.
In the next dialog box, you can open a blank experiment template or one of the preconfigured experiment templates to support your AI workflow. Under Blank Experiments, select the blank template Custom Training.
The experiment is a custom training experiment that uses a custom training function, indicated by the icon.
The experiment definition tab displays the description, hyperparameters, training function, and supporting files that define the experiment. When starting with a blank experiment template, you must manually configure these parameters. If you prefer a template with some preconfigured parameters, select one of the preconfigured custom training templates instead from the Experiment Manager dialog box.
Configure the experiment parameters.
Description — Enter a description of the experiment.
Hyperparameters — Specify the strategy as Exhaustive Sweep or Bayesian Optimization (Statistics and Machine Learning Toolbox), and then define the hyperparameters to use for your experiment. Exhaustive sweep uses every combination of the hyperparameter values, while Bayesian optimization optimizes the specified metric and automatically determines the best combination of hyperparameters for your experiment.
For example, for Run a Custom Training Experiment for Image Comparison, the strategy is Exhaustive Sweep and the hyperparameters are WeightsInitializer and BiasInitializer.
Training Function — Configure training data, network architecture, training procedure, and custom visualizations. Experiment Manager saves the output of this function, so you can export it to the MATLAB workspace when the training is complete. The training function input is a structure with fields from the Hyperparameters table and an experiments.Monitor object. Use this object to track the progress of the training, update information fields in the results table, record values of the metrics used by the training, and produce plots. A minimal training function sketch appears after this list.
For example, for Run a Custom Training Experiment for Image Comparison, the training function accesses the structure of hyperparameters and returns a structure that contains the trained network. The training function implements a custom training loop to train a twin neural network and is defined in a file in the project named ImageComparisonExperiment_training.mlx. The training function also creates a visualization, Test Images, to display pairs of training images when training is complete.
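The following is a minimal skeleton of a custom training function. The loss computation is a placeholder and the function and metric names are illustrative; the experiments.Monitor calls show the intended pattern of recording metrics, updating info fields, and reporting progress.

```matlab
function output = Experiment_training(params,monitor)
% Declare the metrics, info fields, and x-axis label before training starts.
monitor.Metrics = "TrainingLoss";
monitor.Info = "InitialLearnRate";
monitor.XLabel = "Iteration";
updateInfo(monitor,InitialLearnRate=params.InitialLearnRate);

% ... load data and create a dlnetwork here (omitted; workflow-specific) ...

numIterations = 100;
for iteration = 1:numIterations
    % ... evaluate gradients and update learnables, e.g. with adamupdate ...
    loss = exp(-iteration/50); % placeholder value for illustration

    recordMetrics(monitor,iteration,TrainingLoss=loss);
    monitor.Progress = 100*iteration/numIterations;

    if monitor.Stop % respond to the Stop button in the app
        break
    end
end

% Experiment Manager saves this output so you can export it later.
output.trainedNetwork = [];
end
```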
For information about how to run the experiment and compare results after you configure the experiment parameters, see Experiment Manager.
Run Experiment Trials in Parallel
You can decrease the run time of some experiments if you have Parallel Computing Toolbox or MATLAB Parallel Server.
By default, Experiment Manager runs one trial at a time. If you have Parallel Computing Toolbox, you can run multiple trials at the same time or run a single trial on multiple GPUs, on a cluster, or in the cloud. If you have MATLAB Parallel Server, you can also offload experiments as batch jobs in a remote cluster so that you can continue working or close your MATLAB session while your experiment runs.
In the Experiment Manager toolstrip, in the Execution section, use the Mode list to specify an execution mode. If you select the Batch Sequential or Batch Simultaneous execution mode, use the Cluster list and Pool Size field in the toolstrip to specify your cluster and pool size.
For more information, see Run Experiments in Parallel or Offload Experiments as Batch Jobs to a Cluster.
Related Examples
- Create a Deep Learning Experiment for Classification
- Create a Deep Learning Experiment for Regression
- Evaluate Deep Learning Experiments by Using Metric Functions
- Tune Experiment Hyperparameters by Using Bayesian Optimization
- Use Bayesian Optimization in Custom Training Experiments
- Try Multiple Pretrained Networks for Transfer Learning
- Experiment with Weight Initializers for Transfer Learning
- Audio Transfer Learning Using Experiment Manager
- Choose Training Configurations for LSTM Using Bayesian Optimization
- Run a Custom Training Experiment for Image Comparison
- Use Experiment Manager to Train Generative Adversarial Networks (GANs)
- Custom Training with Multiple GPUs in Experiment Manager
More About
Exhaustive Sweep
To sweep through a range of hyperparameter values, set
Strategy to Exhaustive Sweep
. In the
Hyperparameters table, enter the names and values of the
hyperparameters used in the experiment. Hyperparameter names must start with a letter,
followed by letters, digits, or underscores. Hyperparameter values must be scalars or
vectors with numeric, logical, or string values, or cell arrays of character vectors. For
example, these values are valid hyperparameters:
0.01
0.01:0.01:0.05
[0.01 0.02 0.04 0.08]
["alpha" "beta" "gamma"]
{'delta' 'epsilon' 'zeta'}
Experiment Manager trains the network using every combination of the hyperparameter values specified in the table.
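To make "every combination" concrete, this short sketch (with hypothetical values) enumerates the trials that an exhaustive sweep over two hyperparameters would generate:

```matlab
% Hypothetical sweep values, as they might appear in the table.
InitialLearnRate = 0.01:0.01:0.05; % 5 values
Momentum = [0.8 0.9 0.95];         % 3 values

% The sweep runs one trial per combination: 5 x 3 = 15 trials.
[lr,mom] = ndgrid(InitialLearnRate,Momentum);
combinations = [lr(:) mom(:)];
disp(size(combinations,1)) % 15
```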
Bayesian Optimization
With Statistics and Machine Learning Toolbox, find optimal training options by using Bayesian optimization. Set Strategy to Bayesian Optimization. When you run the experiment, Experiment Manager searches for the best combination of hyperparameters. Each trial in the experiment uses a new combination of hyperparameter values based on the results of the previous trials.
In the Hyperparameters table, specify these properties of the hyperparameters used in the experiment:
Name — Enter a valid hyperparameter name. Hyperparameter names must start with a letter, followed by letters, digits, or underscores.
Range — For a real- or integer-valued hyperparameter, enter a two-element vector that gives the lower bound and upper bound of the hyperparameter. For a categorical hyperparameter, enter an array of strings or a cell array of character vectors that lists the possible values of the hyperparameter.
Type — Select real for a real-valued hyperparameter, integer for an integer-valued hyperparameter, or categorical for a categorical hyperparameter.
Transform — Select none to use no transform or log to use a logarithmic transform. When you select log, the hyperparameter values must be positive. With this setting, the Bayesian optimization algorithm models the hyperparameter on a logarithmic scale.
To specify the duration of your experiment, under Bayesian Optimization Options, enter the maximum time in seconds and the maximum number of trials to run. Note that the actual run time and number of trials in your experiment can exceed these settings because Experiment Manager checks these options only when a trial finishes executing.
Optionally, specify deterministic constraints, conditional constraints, and an acquisition function for the Bayesian optimization algorithm (since R2023a). Under Bayesian Optimization Options, click Advanced Options and specify:
Deterministic Constraints — Enter the name of a deterministic constraint function. To run the Bayesian optimization algorithm without deterministic constraints, leave this option blank. For more information, see Deterministic Constraints — XConstraintFcn (Statistics and Machine Learning Toolbox). A sketch of such a constraint function appears after this list.
Conditional Constraints — Enter the name of a conditional constraint function. To run the Bayesian optimization algorithm without conditional constraints, leave this option blank. For more information, see Conditional Constraints — ConditionalVariableFcn (Statistics and Machine Learning Toolbox).
Acquisition Function Name — Select an acquisition function from the list. The default value for this option is expected-improvement-plus. For more information, see Acquisition Function Types (Statistics and Machine Learning Toolbox).
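For illustration, here is a hedged sketch of a deterministic constraint function, assuming the bayesopt-style signature in which the function receives a table with one column per hyperparameter and returns a logical vector marking the feasible points. The rule itself is hypothetical.

```matlab
function tf = myXConstraint(X)
% X is a table with one column per hyperparameter (assumed signature).
% Hypothetical rule: deep networks must use a small learning rate.
tf = X.SectionDepth <= 3 | X.InitialLearnRate <= 0.01;
end
```

In the Deterministic Constraints field, you would then enter the function name, myXConstraint.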
Setup Function Signatures
This table lists the supported signatures for the setup function for a built-in training experiment.
| Goal of Experiment | Setup Function Signature |
| --- | --- |
| Train a network for image classification and regression tasks using the images and responses specified by images. | `function [images,layers,lossFcn,options] = Experiment_setup(params) ... end` |
| Train a network using the images specified by images and responses specified by responses. | `function [images,responses,layers,lossFcn,options] = Experiment_setup(params) ... end` |
| Train a network for sequence or time-series classification and regression tasks (for example, an LSTM or GRU network) using the sequences and responses specified by sequences. | `function [sequences,layers,lossFcn,options] = Experiment_setup(params) ... end` |
| Train a network using the sequences specified by sequences and responses specified by responses. | `function [sequences,responses,layers,lossFcn,options] = Experiment_setup(params) ... end` |
| Train a network for feature classification or regression tasks (for example, a multilayer perceptron, or MLP, network) using the feature data and responses specified by features. | `function [features,layers,lossFcn,options] = Experiment_setup(params) ... end` |
| Train a network using the feature data specified by features and responses specified by responses. | `function [features,responses,layers,lossFcn,options] = Experiment_setup(params) ... end` |
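As a worked example of one row of this table, here is a hedged sketch for the [sequences,responses,layers,lossFcn,options] signature. The synthetic data, layer sizes, and solver are assumptions for illustration only.

```matlab
function [sequences,responses,layers,lossFcn,options] = Experiment_setup(params)
% Synthetic sequence regression data: 200 variable-length sequences with
% 3 features per time step, each labeled with its overall mean.
numObservations = 200;
sequences = cell(numObservations,1);
responses = zeros(numObservations,1);
for i = 1:numObservations
    len = randi([50 100]);
    sequences{i} = randn(3,len);
    responses(i) = mean(sequences{i},"all");
end

% LSTM regression network.
layers = [
    sequenceInputLayer(3)
    lstmLayer(64,OutputMode="last")
    fullyConnectedLayer(1)];

lossFcn = "mse";
options = trainingOptions("adam", ...
    InitialLearnRate=params.InitialLearnRate);
end
```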
Tips
To visualize and build a network, use the Deep Network Designer app.
To reduce the size of your experiments, discard the results and visualizations of any trial that is no longer relevant. In the Actions column of the results table, click the Discard button for the trial.
In your setup function, access the hyperparameter values using dot notation. For more information, see Structure Arrays.
For networks containing batch normalization layers, if the BatchNormalizationStatistics training option is population, Experiment Manager displays final validation metric values that are often different from the validation metrics evaluated during training. The difference in values is the result of additional operations performed after the network finishes training. For more information, see Batch Normalization Layer.
Version History
Introduced in R2020a

R2024b: Improvements to experiment setup
The following preconfigured templates now include an initialization function that configures data or other experiment details before initiating the trial runs, which reduces trial run time. These templates also include suggested hyperparameters:
Image Classification by Sweeping Hyperparameters
Image Classification Using Bayesian Optimization
Image Regression by Sweeping Hyperparameters
Image Regression Using Bayesian Optimization
R2024b: Set up signal classification experiment using transfer learning with preconfigured template
If you have Signal Processing Toolbox™, you can set up your built-in experiment for signal classification using transfer learning by selecting a preconfigured template.
R2024a: Set up signal processing experiments with preconfigured templates
If you have Signal Processing Toolbox, you can set up your built-in or custom training experiments for signal classification by selecting a preconfigured template. Using these templates, you can perform:
Signal segmentation
Signal classification
Signal regression
R2023b: App available in MATLAB
You can now use Experiment Manager in MATLAB, with or without Deep Learning Toolbox. When you share your experiments with colleagues who do not have a Deep Learning Toolbox license, they can open your experiments and access your results. Experiment Manager requires:
Deep Learning Toolbox to run built-in or custom training experiments for deep learning and to view confusion matrices for these experiments
Statistics and Machine Learning Toolbox to run custom training experiments for machine learning and experiments that use Bayesian optimization
Parallel Computing Toolbox to run multiple trials at the same time or a single trial at a time on multiple GPUs, on a cluster, or in the cloud
MATLAB Parallel Server to offload experiments as batch jobs in a remote cluster
R2023b: Delete multiple experiments and results
Use the Experiment Browser to delete multiple experiments or multiple results from a project in a single operation. Select the experiments or results you want to delete, then right-click and select Delete. Your selection must contain only experiments or only results. If you delete an experiment, Experiment Manager also deletes the results contained in the experiment.
R2023a: Visualizations for custom training experiments
Display visualizations for your custom training experiments directly in the Experiment Manager app. When the training is complete, the Review Results gallery in the toolstrip displays a button for each figure that you create in your training function. To display a figure in the Visualizations panel, click the corresponding button in the Custom Plot section of the gallery.
R2023a: Debug code before or after running experiment
Diagnose problems in your experiment directly from the Experiment Manager app.
Before running an experiment, you can test your setup and training functions with your choice of hyperparameter values.
After running an experiment, you can debug your setup and training functions using the same random seed and hyperparameters values you used in one of your trials.
For more information, see Debug Deep Learning Experiments.
R2023a: Ease-of-use enhancements
Specify deterministic constraints, conditional constraints, and an acquisition function for experiments that use Bayesian optimization. Under Bayesian Optimization Options, click Advanced Options and specify:
Deterministic Constraints
Conditional Constraints
Acquisition Function Name
Load a project that is already open in MATLAB. When you start the Experiment Manager app, a dialog box prompts you to open the current project in Experiment Manager. Alternatively, in the Experiment Manager app, select New > Project and, in the dialog box, click Project from MATLAB.
If you have Audio Toolbox™, you can set up your built-in or custom training experiments for audio classification by selecting a preconfigured template.
R2022b: Ease-of-use enhancements
In the Experiment Manager toolstrip, the Restart list replaces the Restart All Canceled button. To restart multiple trials of your experiment, open the Restart list, select one or more restarting criteria, and click Restart. The restarting criteria include All Canceled, All Stopped, All Error, and All Discarded.
During training, the results table displays the intermediate values for standard training and validation metrics for built-in training experiments. These metrics include loss, accuracy (for classification experiments), and root mean squared error (for regression experiments).
In built-in training experiments, the Execution Environment column of the results table displays whether each trial runs on a single CPU, a single GPU, multiple CPUs, or multiple GPUs.
To discard the training plot, confusion matrix, and training results for trials that are no longer relevant, in the Actions column of the results table, click the Discard button.
R2022a: Experiments as batch jobs in a cluster
If you have Parallel Computing Toolbox and MATLAB Parallel Server, you can send your experiment as a batch job to a remote cluster. If you have only Parallel Computing Toolbox, you can use a local cluster profile to develop and test your experiments on your client machine instead of running them on a network cluster. For more information, see Offload Experiments as Batch Jobs to a Cluster.
R2022a: Ease-of-use enhancements
In the Experiment Manager toolstrip, the Mode list replaces the Use Parallel button.
To run one trial of the experiment at a time, select Sequential and click Run.
To run multiple trials at the same time, select Simultaneous and click Run.
To offload the experiment as a batch job, select Batch Sequential or Batch Simultaneous, specify your cluster and pool size, and click Run.
Manage experiments using new Experiment Browser context menu options:
To add a new experiment to a project, right-click the name of the project and select New Experiment.
To create a copy of an experiment, right-click the name of the experiment and select Duplicate.
Specify hyperparameter values as cell arrays of character vectors. In previous releases, Experiment Manager supported only hyperparameter specifications using scalars and vectors with numeric, logical, or string values.
To stop, cancel, or restart a trial, in the Action column of the results table, click the Stop, Cancel, or Restart button. In previous releases, these buttons were located in the Progress column. Alternatively, you can right-click the row for the trial and, in the context menu, select Stop, Cancel, or Restart.
When an experiment trial ends, the Status column of the results table displays one of these reasons for stopping:
Max epochs completed
Met validation criterion
Stopped by OutputFcn
Training loss is NaN
To sort annotations by creation time or trial number, in the Annotations panel, use the Sort By list.
After training completes, save the contents of the results table as a table array in the MATLAB workspace by selecting Export > Results Table.
To export the training information or trained network for a stopped or completed trial, right-click the row for the trial and, in the context menu, select Export Training Information or Export Trained Network.
R2021b: Bayesian optimization in custom training experiments
If you have Statistics and Machine Learning Toolbox, you can use Bayesian optimization to determine the best combination of hyperparameters for a custom training experiment. Previously, custom training experiments supported only sweeping hyperparameters. For more information, see Use Bayesian Optimization in Custom Training Experiments.
R2021b: Experiments in MATLAB Online
Run Experiment Manager in your web browser by using MATLAB Online™. For parallel execution of experiments, you must have access to a Cloud Center cluster.
R2021b: Ease-of-use enhancements
In the Experiment Manager toolstrip, click Cancel to stop an experiment, mark any running trials as Canceled, and discard their results. When the training is complete, click Restart All Canceled to restart all the trials that you canceled.
Use keyboard shortcuts to navigate Experiment Manager when using a mouse is not an option. For more information, see Keyboard Shortcuts for Experiment Manager.
R2021a: Custom training experiments
Create custom training experiments to support workflows such as:
Using a custom training loop on a dlnetwork, such as a twin neural network or a generative adversarial network (GAN)
Training a network by using a model function or a custom learning rate schedule
Updating the learnable parameters of a network by using a custom function
R2021a: Ease-of-use enhancements
When you create an experiment, use a preconfigured template as a guide for defining your experiment. Experiment templates support workflows that include image classification, image regression, sequence classification, semantic segmentation, and custom training loops.
Add annotations to record observations about the results of your experiment. Right-click a cell in the results table and select Add Annotation.
R2020b: Bayesian optimization
If you have Statistics and Machine Learning Toolbox, you can use Bayesian optimization to determine the best combination of hyperparameters for an experiment. For more information, see Tune Experiment Hyperparameters by Using Bayesian Optimization.
R2020b: Parallel execution
If you have Parallel Computing Toolbox, you can run multiple trials of an experiment at the same time by clicking Use Parallel and then Run. Experiment Manager starts the parallel pool and executes multiple simultaneous trials. For more information, see Run Experiments in Parallel.
See Also
Apps
- Experiment Manager | Experiment Manager (Statistics and Machine Learning Toolbox) | Deep Network Designer