Specify Test Properties in the Test Manager

The Test Manager has property settings that specify how test cases, test suites, and test files run. To open the Test Manager, use sltest.testmanager.view. For information about the Test Manager, see Test Manager.

Test Case, Test Suite, and Test File Sections Summary

When you open a test case, test suite, or test file in the Test Manager, the test settings are grouped into sections. Test cases, test suites, and test files have different sections and settings. Click a test case, test suite, or test file in the Test Browser pane to see its settings.

If you do not want to see all of the available test sections, you can use the Test Manager preferences to hide sections:

  1. In the Test Manager toolstrip, click Preferences.

  2. Select the Test File, Test Suite, or Test Case tab.

  3. Select the sections to show, or clear the sections to hide. To show only the sections in which you have already set or changed settings, clear all selections in the Preferences dialog box.

  4. Click OK.

Sections that you already modified appear in the Test Manager, regardless of the preference setting.

To set these properties programmatically, see sltest.testmanager.getpref and sltest.testmanager.setpref.

Tags

Tag your test file, test suite, or test case with categorizations, such as safety, logged-data, or burn-in. Filter tests using these tags when executing tests or viewing results. See Filter Test Execution and Results.

For the corresponding API, see the Tags property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.
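For example, Tags is a settable property, so you can tag a test file in a few lines. A minimal sketch; the file name and tag names are illustrative:

```matlab
% Create a test file, tag it, and save the change to disk
tf = sltest.testmanager.TestFile('myTests.mldatx');   % illustrative file name
tf.Tags = 'safety, burn-in';                          % comma-separated tag list
saveToFile(tf);
```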

Description

Add descriptive text to your test case, test suite, or test file.

For the corresponding API, see the Description property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.

Requirements

If you have Simulink® Requirements™ installed, you can establish traceability by linking your test file, test suite, or test case to requirements. For more information, see Link to Test Cases from Requirements (Simulink Requirements).

To link a test case, test suite, or test file to a requirement:

  1. Open the Requirements Editor. In the Simulink Toolstrip, on the Apps tab, under Model Verification, Validation, and Test, click Requirements Editor.

  2. Highlight a requirement.

  3. In the Test Manager, in the Requirements section, click the arrow next to the Add button and select Link to Selected Requirement.

The requirement link appears in the Requirements list.

For the corresponding API, see the Requirements property of sltest.testmanager.TestFile, sltest.testmanager.TestSuite, or sltest.testmanager.TestCase, respectively.

System Under Test

Specify the model you want to test in the System Under Test section. To use an open model in the currently active Simulink window, click the Use current model button.

Note

The model must be available on the path to run the test case. You can add the folder that contains the model to the path using the preload callback. See Callbacks.

Specifying a new model in the System Under Test section can cause the model information to be out of date. To update the model test harnesses, Signal Editor scenarios, and available configuration sets, click the Refresh button.

For the corresponding API, see the Model name-argument pair of setProperty.
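For example, this sketch assigns a model to an existing test case programmatically; the test file and model names are illustrative:

```matlab
% Load a test file and set the model for its first test case
tf = sltest.testmanager.TestFile('myTests.mldatx');
ts = getTestSuites(tf);
tc = getTestCases(ts(1));
setProperty(tc(1),'Model','sldemo_absbrake');
```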

Test Harness

If you have a test harness in your system under test, then you can select the test harness to use for the test case. If you have added or removed test harnesses in the model, click the Refresh button to view the updated test harness list.

For more information about using test harnesses, see Refine, Test, and Debug a Subsystem.

For the corresponding API, see the HarnessName name-argument pair of setProperty.

Simulation Settings and Release Overrides

To override the Simulation Mode of the model settings, select a new mode from the list. If the model contains SIL/PIL blocks and you need to run in Normal mode, enable Override model blocks in SIL/PIL mode to normal mode. For the corresponding API, see the OverrideSILPILMode name-argument pair of setProperty.

You can simulate the model and run tests in more than one MATLAB® release that is installed on your system. Use Select releases for simulation to select available releases. You can use releases from R2011b forward.

To add one or more releases so they are available in the Test Manager, click Add releases in Select releases for simulation to open the Release pane in the Test Manager Preferences dialog box. Navigate to the location of the MATLAB installation you want to add, and click OK.

You can add releases to the list and delete them. You cannot delete the release in which you started the MATLAB session.

For more information, see Run Tests in Multiple Releases of MATLAB. For the corresponding API, see the Release name-argument pair of setProperty.
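For example, assuming tc is an sltest.testmanager.TestCase object and the release has already been added in the Test Manager preferences (the release name is illustrative):

```matlab
% Override the simulation mode and the simulation release
% tc is an sltest.testmanager.TestCase object
setProperty(tc,'SimulationMode','Normal');
setProperty(tc,'Release','R2020b');   % must be registered in Preferences
```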

System Under Test Considerations

  • The System Under Test cannot be in fast restart or external mode.

  • To stop a test running in Rapid Accelerator mode, press Ctrl+C at the MATLAB command prompt.

  • When you run tests with parallel execution in Rapid Accelerator mode, streamed signals do not appear in the Test Manager.

  • The System Under Test cannot be a protected model.

Simulation 1 and Simulation 2

These sections appear in equivalence test cases. Use them to specify the details about the simulations that you want to compare. Enter the system under test, the test harness if applicable, and simulation setting overrides under Simulation 1. You can then click Copy settings from Simulation 1 under Simulation 2 to use the first set of settings as a starting point for your second set of simulation settings.

For the test to pass, Simulation 1 and Simulation 2 must log the same signals.

Use these sections with the Equivalence Criteria section to define the premise of your test case. For an example of an equivalence test, see Test Two Simulations for Equivalence.

For the corresponding API, see the SimulationIndex name-argument pair of setProperty.
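For example, assuming tc is an equivalence test case object (the model names are illustrative):

```matlab
% Assign a different model to each simulation of an equivalence test
% tc is an sltest.testmanager.TestCase object for an equivalence test
setProperty(tc,'Model','myController','SimulationIndex',1);
setProperty(tc,'Model','myControllerSIL','SimulationIndex',2);
```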

Parameter Overrides

Specify parameter values in the test case to override the parameter values in the model workspace, data dictionary, base workspace, or in a model reference hierarchy. Parameters are grouped into sets. You can turn parameter sets and individual parameter overrides on or off by using the check box next to the set or parameter.

To add a parameter override:

  1. Click Add.

    A dialog box opens with a list of parameters. If the list of parameters is not current, click the Refresh button in the dialog box.

  2. Select the parameter you want to override.

  3. To add the parameter to the parameter set, click OK.

  4. Enter the override value in the parameter Override Value column.

To restore the default value of a parameter, clear the value in the Override Value column and press Enter.

You can also add a set of parameter overrides from a MAT-file, including MAT-files generated by Simulink Design Verifier™. Click the Add arrow and select Add File to create a parameter set from a MAT-file.

For an example that uses parameter overrides, see Override Model Parameters in a Test Case.

For the corresponding APIs, see the sltest.testmanager.ParameterOverride class, and the OverrideStartTime, OverrideStopTime, OverrideInitialState, OverrideModelOutputSettings, and ConfigSetOverrideSetting name-argument pairs of the setProperty method.
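For example, this sketch creates a parameter set and one override, assuming tc is a test case object; the set name, parameter name, and value are illustrative:

```matlab
% Add a parameter set with a single override to the test case
ps = addParameterSet(tc,'Name','Gain sweep');
po = addParameterOverride(ps,'Gain',2.5);
```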

Parameter Overrides Considerations

The Test Manager displays only top-level system parameters from the system under test.

Callbacks

Test-File Level Callbacks

Each test file has two callback scripts that execute at different times during a test:

  • Setup runs before the test file executes.

  • Cleanup runs after the test file executes.

For the corresponding test file APIs, see the SetupCallback and CleanupCallback name-argument pairs of the TestFile setProperty method.
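The callbacks are supplied as character vectors of MATLAB code. For example, assuming tf is an sltest.testmanager.TestFile object (the folder name is illustrative):

```matlab
% Add a model folder to the path before the test file runs,
% and remove it afterward
setProperty(tf,'SetupCallback','addpath(fullfile(pwd,''models''));');
setProperty(tf,'CleanupCallback','rmpath(fullfile(pwd,''models''));');
```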

Test-Suite Level Callbacks

Each test suite has two callback scripts that execute at different times during a test:

  • Setup runs before the test suite executes.

  • Cleanup runs after the test suite executes.

If a test suite does not have any test cases, the test suite callbacks do not execute.

For the corresponding APIs, see the SetupCallback and CleanupCallback name-argument pairs of the TestSuite setProperty method.

Test-Case Level Callbacks

Each test case has three callback scripts that execute at different times during a test:

  • Pre-load runs before the model loads and before the model callbacks.

  • Post-load runs after the model loads and the PostLoadFcn model callback.

  • Cleanup runs after simulations and model callbacks.

See Test Execution Order for information about the order in which callbacks occur and models load and simulate.

To run a single callback script, click the Run button above the corresponding script.

You can use predefined variables in the test case callbacks:

  • sltest_bdroot available in Post-Load: The model simulated by the test case. The model can be a harness model.

  • sltest_sut available in Post-Load: The system under test. For a harness, it is the component under test.

  • sltest_isharness available in Post-Load: Returns true if sltest_bdroot is a harness model.

  • sltest_simout available in Cleanup: Simulation output produced by simulation.

  • sltest_iterationName available in Pre-Load, Post-Load, and Cleanup: Name of the currently executing test iteration.

disp and fprintf do not work in callbacks. To verify that the callbacks are executed, use a MATLAB script that includes breakpoints in the callbacks.

The test case callback scripts are not stored with the model and do not override Simulink model callbacks. Consider the following when using callbacks:

  • To stop execution of an infinite loop from a callback script, press Ctrl+C at the MATLAB command prompt.

  • sltest.testmanager functions are not supported.

For the corresponding APIs, see the PreloadCallback, PostloadCallback, CleanupCallback, and PreStartRealTimeApplicationCallback name-argument pairs of the TestCase setProperty method.
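For example, a Post-Load callback can use the predefined sltest_bdroot variable. A sketch assuming tc is a test case object; the stop time value is illustrative:

```matlab
% Set the stop time on whichever model the test case loads
% (sltest_bdroot can be a harness model)
setProperty(tc,'PostloadCallback', ...
    'set_param(sltest_bdroot,''StopTime'',''10'');');
```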

Assessment Callback

Use the Assessment Callback section to enter a callback that defines variables and conditions used only in logical and temporal assessments. See Assessment Callback in the Logical and Temporal Assessments section for more information.

For the corresponding API, see setAssessmentsCallback.

Inputs

A test case can use input data from:

  • A Signal Editor block in the system under test. Select Signal Editor scenario and select the scenario. The system under test can have only one Signal Editor block at the top level.

  • An external data file. In the External Inputs table, click Add. Select a MAT-file or Microsoft® Excel® file.

    For more information on using external files as inputs, see Use External Excel or MAT-File Data in Test Cases. For information about the file format for Microsoft Excel files in Test Manager, see Format Test Case Data in Excel.

  • Scenarios in a Test Sequence block. First, click the refresh arrow next to the Test Sequence Block field, then select the Test Sequence block in the model that contains the scenarios. If you do not also select a scenario from Override with Scenario and do not use iterations, then the test runs the active scenario in the selected Test Sequence block. If you do not also select a scenario, but do use iterations, then the active scenario in the Test Sequence block is the default for all the iterations.

    Use Override with Scenario to override the active scenario in the selected Test Sequence block. Click the refresh arrow next to the Override with Scenario field. Then, select the scenario to use instead of the active scenario or as the default for the iterations. In the Iterations section, you can change the scenario assigned to each iteration. For more information, see Use Test Sequence Scenarios in the Test Sequence Editor and Test Manager.

To include the input data in your test results set, select Include input data in test result.

If the time interval of your input data is shorter than the model simulation time, you can limit the simulation to the time specified by your input data by selecting Stop simulation at last time point.

For more information on test inputs, see the Test Authoring: Inputs page.

Edit Input Data Files in Test Manager

From the Test Manager, you can edit your input data files.

To edit a file, select the file and click Edit. You can then edit the data in the signal editor for MAT-files or Microsoft Excel for Excel files.

To learn about the syntax for Excel files, see Format Test Case Data in Excel.

For the corresponding API, see sltest.testmanager.TestInput.
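For example, assuming tc is a test case object (the file name is illustrative):

```matlab
% Attach an external Excel file as input data for the test case
in = addInput(tc,'heater_inputs.xlsx');
```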

Simulation Outputs

Use the Simulation Outputs section to add signal outputs to your test results. Signals logged in your model or test harness can appear in the results after you add them as simulation outputs. You can then plot them. Add individual signals to log and plot or add a signal set.

Under Simulation Outputs, click Add, then follow the prompts to select the signals to log.

Use the options in the Other Outputs subsection to add states, final states, model output values, data store variables, and signal logging values to your test results. To enable selecting one or more of these options, click Override model settings.

  • States — Include state values between blocks during simulation. You must have a Sequence Viewer block in your model to include state values.

  • Final states — Include final state values. You must have a Sequence Viewer block in your model to include final state values.

  • Output — Include model output values.

  • Data stores — Include logged data store variables in Data Store Memory blocks in the model. This option is selected by default.

  • Signal logging — Include logged signals specified in the model. This option is selected by default. If you selected Log Signal Outputs when you created the harness, all of the output signals for the component under test are logged and returned in test results, even though they are not listed in the Simulation Outputs section. To turn off logging for one of the signals, in the test harness, right-click a signal and select Stop Logging Selected Signals.

For more information, see Capture Simulation Data in a Test Case.

For the corresponding API, see the OverrideModelOutputSettings name-argument pair of setProperty.

Configuration Settings Overrides

For the test case, you can specify configuration settings that differ from the settings in the model. Setting the configuration settings in the test case enables you to try different configurations for a test case without modifying the model. The configuration settings overrides options are:

  • Do not override model settings — Use the current model configuration settings

  • Name — Name of active configuration set. A model can have only one active configuration set. Refresh the list to see all available configuration sets and select the desired one to be active. If you leave the default [Model Settings] as the name, the simulation uses the default, active configuration set of the model.

  • Attach configuration set in a file — Path to the external file (File Location) that contains a configuration set variable. The variable you specify in Variable Name references the name of a configuration set in the file. For information on creating a configuration set, see Simulink.ConfigSet and Save a Configuration Set. For information on configuration set references, see Share a Configuration with Multiple Models.

For the corresponding API, see the ConfigSetOverrideSetting, ConfigSetName, ConfigSetVarName, and ConfigSetFileLocation name-argument pairs of setProperty.

Baseline Criteria

The Baseline Criteria section appears in baseline test cases. When a baseline test case executes, Test Manager captures signal data from signals in the model marked for logging and compares them to the baseline data.

Capture Baseline Criteria

To capture logged signal data from the system under test to use as the baseline criteria, click Capture. Then follow the prompts in the Capture Baseline dialog box. Capturing the data compiles and simulates the system under test and stores the output from the logged signals to the baseline. For a baseline test example, see Compare Model Output to Baseline Data.

For the corresponding API, see the captureBaselineCriteria method.
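For example, assuming tc is a baseline test case whose system under test is already set (the file name is illustrative):

```matlab
% Simulate the system under test, store logged signals in a MAT-file,
% and add the file to the test case as baseline criteria
baseline = captureBaselineCriteria(tc,'baseline1.mat',true);
```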

You can save the signal data to a MAT-file or a Microsoft Excel file. To understand the format of the Excel file, see Format Test Case Data in Excel.

You can capture the baseline criteria using the current release for simulation or another release installed on your system. Add the releases you want to use in the Test Manager preferences. Then, select the releases you want available in your test case using the Select releases for simulation option in the test case. When you run the test, you can compare the baseline against the release you created the baseline in or against another release. For more information, see Run Tests in Multiple Releases of MATLAB.

When you select Excel as the output format, you can specify the sheet name to save the data to. If you use the same Excel file for input and output data, by default both sets of data appear in the same sheet.

If you are capturing the data to a file that already contains outputs, specify the sheet name to overwrite the output data only in that sheet of the file.

To save a baseline for each test case iteration in a separate sheet in the same file, select Capture Baselines for Iterations. This check box appears only if your test case already contains iterations. For more information on iterations, see Test Iterations.

Specify Tolerances

You can specify tolerances to determine the pass-fail criteria of the test case. You can specify absolute, relative, leading, and lagging tolerances for individual signals or the entire baseline criteria set.

After you capture the baseline, the baseline file and its signals appear in the table. In the table, you can set the tolerances for the signals. To see tolerances used in an example for baseline testing, see Compare Model Output to Baseline Data.

For the corresponding API, see the AbsTol, RelTol, LeadingTol, and LaggingTol properties of sltest.testmanager.BaselineCriteria.
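For example, assuming tc is a baseline test case with captured baseline criteria (the tolerance value is illustrative):

```matlab
% Retrieve the baseline criteria and set an absolute tolerance
bc = getBaselineCriteria(tc);   % array of BaselineCriteria objects
bc(1).AbsTol = 0.02;
```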

Add File as Baseline

By clicking Add, you can select an existing file as a baseline. You can add MAT-files and Microsoft Excel files as the baseline. Format Microsoft Excel files as described in Format Test Case Data in Excel.

For the corresponding API, see the addInput method.

Update Signal Data in Baseline

You can edit the signal data in your baseline, for example, if your model changed and you expect different values. To open the signal editor or the Microsoft Excel file for editing, select the baseline file from the list and click Edit. See Manually Update Signal Data in a Baseline.

You can also update your baseline when you examine test failures in the data inspector view. See Examine Test Failures and Modify Baselines.

Equivalence Criteria

This section appears in equivalence test cases. The equivalence criteria is a set of signal data to compare in Simulation 1 and Simulation 2. Specify tolerances to regulate pass-fail criteria of the test. You can specify absolute, relative, leading, and lagging tolerances for the signals.

To specify tolerances, first click Capture to run the system under test in Simulation 1 and add signals marked for logging to the table. Specify the tolerances in the table.

After you capture the signals, you can select signals from the table to narrow your results. If you do not select signals under Equivalence Criteria, running the test case compares all the logged signals in Simulation 1 and Simulation 2.

For an example of an equivalence test case, see Test Two Simulations for Equivalence.

For the corresponding API, see the captureEquivalenceCriteria method.

Iterations

Use iterations to repeat a test with different parameter values, configuration sets, or input data.

  • You can run multiple simulations with the same inputs, outputs, and criteria by sweeping through different parameter values in a test case.

  • Models, external data files, and Test Sequence blocks can contain multiple test input scenarios. To simplify your test file architecture, you can run different input scenarios as iterations rather than as different test cases. You can apply different baseline data to each iteration, or capture new baseline data from an iteration set.

  • You can iterate over different configuration sets, for example to compare results between solvers or data types. You can also iterate over different scenarios in a Test Sequence block.

To create iterations from defined parameter sets, signal editor scenarios, Test Sequence scenarios, external data files, or configuration sets, use table iterations. To create a custom set of iterations from the available test case elements, write a MATLAB iteration script in the test case. For more information about test iterations, see Test Iterations.

For the corresponding API, see sltest.testmanager.TestIteration.
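As a sketch of scripted iterations, this loop sweeps a parameter value. The code runs in the test case's scripted iterations section, where the sltest_testCase variable is predefined; the parameter name and values are illustrative:

```matlab
% Create one iteration per parameter value
vals = [0.5 1.0 1.5];
for k = 1:numel(vals)
    itr = sltestiteration;              % new TestIteration object
    setTestParam(itr,'Gain',vals(k));   % override the parameter for this run
    addIteration(sltest_testCase,itr);  % register the iteration
end
```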

Logical and Temporal Assessments

Create temporal assessments using the form-based editor that prompts you for conditions, events, signal values, delays, and responses. When you collapse the individual elements, the editor displays a readable statement summarizing the assessment. See Assess Temporal Logic by Using Temporal Assessments and Logical and Temporal Assessment Syntax for more information.

Assessment Callback

You can define variables and use them in logical and temporal assessment conditions and expressions in the Assessment Callback section.

Define variables by writing a script in the Assessment Callback section. You can map these variables to symbols in the Symbols pane by right-clicking the symbol, selecting Map to expression, and entering the variable name in the Expression field. For information on how to map variables to symbols, see Map to expression under Resolve Assessment Parameter Symbols.

The Assessment Callback section has access to the predefined variables that contain test, simulation, and model data. You can define a variable as a function of this data. For more information, see Define Variables in the Assessment Callback Section. For the corresponding API methods, see setAssessmentsCallback and getAssessmentsCallback.

Symbol t (time)

The symbol t is automatically bound to simulation time and can be used in logical and temporal assessment conditions. This symbol does not need to be mapped to a variable and is not visible in the Symbols pane. For example, to limit an assessment to a time between 5 and 7 seconds, create a Trigger-response assessment and, in the trigger condition, enter t > 5 & t < 7. To avoid unexpected behavior, do not define a new symbol t in the Symbols pane.

Symbol Data Type

If you map a symbol to a discrete data signal that is linearly interpolated, the interpolation is automatically changed to zero-order hold during the assessment evaluation.

Custom Criteria

This section includes an embedded MATLAB editor to define custom pass/fail criteria for your test. Select function customCriteria(test) to enable the criteria script in the editor. Custom criteria operate outside of model run time; the script evaluates after model simulation.

Common uses of custom criteria include verifying signal characteristics or verifying test conditions. MATLAB Unit Test qualifications provide a framework for verification criteria. For example, this custom criteria script gets the last value of the signal PhiRef and verifies that it equals 0:

% Get the last value of PhiRef from the dataset Signals_Req1_3
lastValue = test.sltest_simout.get('Signals_Req1_3').get('PhiRef').Values.Data(end);

% Verify that the last value equals 0
test.verifyEqual(lastValue,0);

See Process Test Results with Custom Scripts. For a list of MATLAB Unit Test qualifications, see Table of Verifications, Assertions, and Other Qualifications.

You can also define plots in the Custom Criteria section. See Create, Store, and Open MATLAB Figures.

For the corresponding API, see sltest.testmanager.CustomCriteria.

Coverage Settings

Use this section to configure coverage collection for a test file. The settings propagate from the test file to the test suites and test cases in the test file. You can deselect coverage settings for a test suite or test case. The coverage collection options are:

  • Record coverage for system under test — Collects coverage for the model specified as the System Under Test for each test case.

  • Record coverage for referenced models — Collects coverage for models that are referenced from within the specified system under test.

For information on the Coverage Metrics options, see Types of Model Coverage (Simulink Coverage).

Coverage filter files specified in this section override filter files specified in the model configuration settings. For more information, see Collect Coverage in Tests. Coverage is not supported for SIL or PIL blocks.

For the corresponding API, see sltest.testmanager.CoverageSettings.

Test File Options

Close open Figures at the end of execution

When your tests generate figures, select this option to clear the working environment of figures after the test execution completes.

For the corresponding API, see the CloseFigures property of sltest.testmanager.Options.

Store MATLAB figures

Select this option to store figures generated during the test with the test file. You can enter MATLAB code that creates figures and plots as a callback or in the test case Custom Criteria section. See Create, Store, and Open MATLAB Figures.

For the corresponding API, see the SaveFigures property of sltest.testmanager.Options.

Generate report after execution

Select Generate report after execution to create a report after the test executes. Selecting this option displays report options that you can set. The settings are saved with the test file.

Note

To enable the options to specify the number of plots per page, select Plots for simulation output and baseline.

For the corresponding API, see the GenerateReport property of sltest.testmanager.Options.

For detailed reporting information, see Export Test Results and Customize Test Results Reports.
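The per-file options above can also be set in one place programmatically, assuming tf is an sltest.testmanager.TestFile object:

```matlab
% Get the options object for the test file and set figure and report behavior
opts = getOptions(tf);          % sltest.testmanager.Options
opts.CloseFigures = true;
opts.SaveFigures = true;
opts.GenerateReport = true;
```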

Test File Content

For a MATLAB-based Simulink test, this section displays the contents of the .m file that defines the test. This section appears only if you open or create a MATLAB-based Simulink test. See Using MATLAB-Based Simulink Tests in the Test Manager.
