
# step

Improve linear regression model by adding or removing terms

## Syntax

``NewMdl = step(mdl)``
``NewMdl = step(mdl,Name,Value)``

## Description

`NewMdl = step(mdl)` returns a linear regression model based on `mdl` using stepwise regression to add or remove one predictor.


`NewMdl = step(mdl,Name,Value)` specifies additional options using one or more name-value pair arguments. For example, you can specify the criterion to use to add or remove terms and the maximum number of steps to take.

## Examples


Fit a linear regression model and use `step` to improve the model by adding or removing terms. This example also describes how the `step` function treats a categorical predictor.

Load the `carsmall` data set, and create a table using the `Weight`, `Model_Year`, and `MPG` variables.

```matlab
load carsmall
tbl1 = table(MPG,Weight);
tbl1.Year = categorical(Model_Year);
```

Create a linear regression model of `MPG` as a function of `Weight`.

`mdl1 = fitlm(tbl1,'MPG ~ Weight')`
```
mdl1 = 
Linear regression model:
    MPG ~ 1 + Weight

Estimated Coefficients:
                    Estimate         SE        tStat       pValue  
                   __________    _________    _______    __________
    (Intercept)        49.238       1.6411     30.002    2.7015e-49
    Weight         -0.0086119    0.0005348    -16.103    1.6434e-28

Number of observations: 94, Error degrees of freedom: 92
Root Mean Squared Error: 4.13
R-squared: 0.738,  Adjusted R-Squared: 0.735
F-statistic vs. constant model: 259, p-value = 1.64e-28
```

Adjust the model to include up to `'quadratic'` terms by using `step`. Specify `'NSteps'` as 5 to allow at most 5 steps of stepwise regression. Specify `'Verbose'` as 2 to display the evaluation process and the decision taken at each step.

`NewMdl1 = step(mdl1,'Upper','quadratic','NSteps',5,'Verbose',2)`
```
 pValue for adding Year is 8.2284e-15
 pValue for adding Weight^2 is 0.15454
1. Adding Year, FStat = 47.5136, pValue = 8.22836e-15
 pValue for adding Weight:Year is 0.0071637
 pValue for adding Weight^2 is 0.0022303
2. Adding Weight^2, FStat = 9.9164, pValue = 0.0022303
 pValue for adding Weight:Year is 0.19519
 pValue for removing Year is 2.9042e-16
```
```
NewMdl1 = 
Linear regression model:
    MPG ~ 1 + Weight + Year + Weight^2

Estimated Coefficients:
                    Estimate          SE         tStat       pValue  
                   __________    __________    _______    __________
    (Intercept)        54.206        4.7117     11.505    2.6648e-19
    Weight          -0.016404     0.0031249    -5.2493    1.0283e-06
    Year_76            2.0887       0.71491     2.9215     0.0044137
    Year_82            8.1864       0.81531     10.041    2.6364e-16
    Weight^2       1.5573e-06    4.9454e-07      3.149     0.0022303

Number of observations: 94, Error degrees of freedom: 89
Root Mean Squared Error: 2.78
R-squared: 0.885,  Adjusted R-Squared: 0.88
F-statistic vs. constant model: 172, p-value = 5.52e-41
```

`step` creates two indicator variables, `Year_76` and `Year_82`, because `Year` includes three distinct values. `step` does not consider the square terms of indicator variables because the square of an indicator variable is itself.

Because `'Verbose'` is 2, `step` displays the evaluation process:

• `step` computes the p-values for adding `Year` or `Weight^2`. The p-value for `Year` is less than both the p-value for `Weight^2` and the default threshold value of 0.05; therefore, `step` adds `Year` to the model.

• `step` computes the p-values for adding `Weight:Year` or `Weight^2`. Because the p-value for `Weight^2` is less than the p-value for `Weight:Year`, the `step` function adds `Weight^2` to the model.

• After adding the quadratic term, `step` computes the p-value for adding `Weight:Year` again, but the p-value is greater than the threshold value. Therefore, `step` does not add the term to the model. `step` does not examine adding `Weight^3` because of the upper bound specified by the `'Upper'` name-value pair argument.

• `step` looks for terms to remove. `step` already examined `Weight^2`, so it computes only the p-value for removing `Year`. Because the p-value is less than the default threshold value of 0.10, `step` does not remove the term.

• Although the maximum allowed number of steps is 5, `step` terminates the process after two steps because the model does not improve by adding or removing a term.

`step` treats the two indicator variables as one predictor variable and adds `Year` in one step. To treat the two indicator variables as two distinct predictor variables, use `dummyvar` to create separate categorical variables.

```matlab
temp_Year = dummyvar(tbl1.Year);
Year_76 = temp_Year(:,2);
Year_82 = temp_Year(:,3);
```

Create a table containing `MPG`, `Weight`, `Year_76`, and `Year_82`.

`tbl2 = table(MPG,Weight,Year_76,Year_82);`

Create a linear regression model of `MPG` as a function of `Weight`, and use `step` to improve the model.

```matlab
mdl2 = fitlm(tbl2,'MPG ~ Weight');
NewMdl2 = step(mdl2,'Upper','quadratic','NSteps',5)
```
```
1. Adding Year_82, FStat = 83.1956, pValue = 1.76163e-14
2. Adding Weight:Year_82, FStat = 8.0641, pValue = 0.0055818
3. Adding Year_76, FStat = 8.1284, pValue = 0.0054157
```
```
NewMdl2 = 
Linear regression model:
    MPG ~ 1 + Year_76 + Weight*Year_82

Estimated Coefficients:
                       Estimate          SE         tStat       pValue  
                      __________    __________    _______    __________
    (Intercept)           38.844        1.5294     25.397     1.503e-42
    Weight             -0.006272    0.00042673    -14.698    1.5622e-25
    Year_76               2.0395       0.71537      2.851     0.0054157
    Year_82               19.607        3.8731     5.0623    2.2163e-06
    Weight:Year_82    -0.0046268     0.0014979    -3.0888     0.0026806

Number of observations: 94, Error degrees of freedom: 89
Root Mean Squared Error: 2.79
R-squared: 0.885,  Adjusted R-Squared: 0.88
F-statistic vs. constant model: 171, p-value = 6.54e-41
```

The model `NewMdl2` includes the interaction term `Weight:Year_82` instead of `Weight^2`, the term included in `NewMdl1`.

## Input Arguments


Linear regression model, specified as a `LinearModel` object created using `fitlm` or `stepwiselm`.

You can use `step` only if you create `mdl` by using `fitlm` with the `'RobustOpts'` name-value pair argument set to the default `'off'`.

### Name-Value Pair Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Example: `'Criterion','aic','Upper','quadratic','Verbose',2` instructs `step` to use the Akaike information criterion, include (at most) the quadratic terms in the model, and display the evaluation process and the decision taken at each step.

Criterion to add or remove terms, specified as the comma-separated pair consisting of `'Criterion'` and one of these values:

• `'sse'` — p-value for an F-test of the change in the sum of squared error that results from adding or removing the term

• `'aic'` — Change in the value of Akaike information criterion (AIC)

• `'bic'` — Change in the value of Bayesian information criterion (BIC)

• `'rsquared'` — Increase in the value of R-squared

• `'adjrsquared'` — Increase in the value of adjusted R-squared

Example: `'Criterion','bic'`
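As an illustration, this hedged sketch (reusing the `carsmall` variables from the example above) selects terms by the change in BIC rather than by F-test p-values:

```matlab
% Sketch: stepwise selection driven by BIC instead of p-values.
load carsmall
tbl = table(MPG,Weight,Horsepower);
mdl = fitlm(tbl,'MPG ~ Weight');
% Each step adds or removes the term with the best change in BIC.
NewMdl = step(mdl,'Criterion','bic','NSteps',3)
```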

Model specification describing terms that cannot be removed from the model, specified as the comma-separated pair consisting of `'Lower'` and one of these values:

• A character vector or string scalar naming the model.

| Value | Model Type |
| --- | --- |
| `'constant'` | Model contains only a constant (intercept) term. |
| `'linear'` | Model contains an intercept and linear term for each predictor. |
| `'interactions'` | Model contains an intercept, linear term for each predictor, and all products of pairs of distinct predictors (no squared terms). |
| `'purequadratic'` | Model contains an intercept term and linear and squared terms for each predictor. |
| `'quadratic'` | Model contains an intercept term, linear and squared terms for each predictor, and all products of pairs of distinct predictors. |
| `'polyijk'` | Model is a polynomial with all terms up to degree `i` in the first predictor, degree `j` in the second predictor, and so on. Specify the maximum degree for each predictor by using numerals 0 through 9. The model contains interaction terms, but the degree of each interaction term does not exceed the maximum value of the specified degrees. For example, `'poly13'` has an intercept and `x1`, `x2`, `x2^2`, `x2^3`, `x1*x2`, and `x1*x2^2` terms, where `x1` and `x2` are the first and second predictors, respectively. |
• A t-by-(p + 1) matrix, or a Terms Matrix, specifying terms in the model, where t is the number of terms and p is the number of predictor variables, and +1 accounts for the response variable. A terms matrix is convenient when the number of predictors is large and you want to generate the terms programmatically.

• A character vector or string scalar representing a Formula in the form

`'Y ~ terms'`,

where the `terms` are in Wilkinson Notation.

Example: `'Lower','linear'`

Data Types: `single` | `double` | `char` | `string`
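For example, a sketch that prevents `step` from ever removing `Weight` by passing a formula as the lower bound (assuming the `carsmall` variables used earlier):

```matlab
% Sketch: 'Lower' keeps Weight in the model regardless of its p-value.
load carsmall
tbl = table(MPG,Weight,Horsepower);
mdl = fitlm(tbl,'MPG ~ Weight + Horsepower');
NewMdl = step(mdl,'Lower','MPG ~ Weight','NSteps',5)
```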

Maximum number of steps to take, specified as the comma-separated pair consisting of `'NSteps'` and a positive integer.

Example: `'NSteps',5`

Data Types: `single` | `double`

Threshold for the criterion to add a term, specified as the comma-separated pair consisting of `'PEnter'` and a scalar value, as described in this table.

| Criterion | Default Value | Decision |
| --- | --- | --- |
| `'SSE'` | 0.05 | If the p-value of the F-statistic is less than `PEnter` (p-value to enter), add the term to the model. |
| `'AIC'` | 0 | If the change in the AIC of the model is less than `PEnter`, add the term to the model. |
| `'BIC'` | 0 | If the change in the BIC of the model is less than `PEnter`, add the term to the model. |
| `'Rsquared'` | 0.1 | If the increase in the R-squared value of the model is greater than `PEnter`, add the term to the model. |
| `'AdjRsquared'` | 0 | If the increase in the adjusted R-squared value of the model is greater than `PEnter`, add the term to the model. |

For more information, see the `Criterion` name-value pair argument.

Example: `'PEnter',0.075`

Threshold for the criterion to remove a term, specified as the comma-separated pair consisting of `'PRemove'` and a scalar value, as described in this table.

| Criterion | Default Value | Decision |
| --- | --- | --- |
| `'SSE'` | 0.10 | If the p-value of the F-statistic is greater than `PRemove` (p-value to remove), remove the term from the model. |
| `'AIC'` | 0.01 | If the change in the AIC of the model is greater than `PRemove`, remove the term from the model. |
| `'BIC'` | 0.01 | If the change in the BIC of the model is greater than `PRemove`, remove the term from the model. |
| `'Rsquared'` | 0.05 | If the increase in the R-squared value of the model is less than `PRemove`, remove the term from the model. |
| `'AdjRsquared'` | -0.05 | If the increase in the adjusted R-squared value of the model is less than `PRemove`, remove the term from the model. |

At each step, the `step` function also checks whether a term is redundant (linearly dependent) with other terms in the current model. When any term is linearly dependent with other terms in the current model, the `step` function removes the redundant term, regardless of the criterion value.

For more information, see the `Criterion` name-value pair argument.

Example: `'PRemove',0.05`

Model specification describing the largest set of terms in the fit, specified as the comma-separated pair consisting of `'Upper'` and one of these values:

• A character vector or string scalar naming the model.

| Value | Model Type |
| --- | --- |
| `'constant'` | Model contains only a constant (intercept) term. |
| `'linear'` | Model contains an intercept and linear term for each predictor. |
| `'interactions'` | Model contains an intercept, linear term for each predictor, and all products of pairs of distinct predictors (no squared terms). |
| `'purequadratic'` | Model contains an intercept term and linear and squared terms for each predictor. |
| `'quadratic'` | Model contains an intercept term, linear and squared terms for each predictor, and all products of pairs of distinct predictors. |
| `'polyijk'` | Model is a polynomial with all terms up to degree `i` in the first predictor, degree `j` in the second predictor, and so on. Specify the maximum degree for each predictor by using numerals 0 through 9. The model contains interaction terms, but the degree of each interaction term does not exceed the maximum value of the specified degrees. For example, `'poly13'` has an intercept and `x1`, `x2`, `x2^2`, `x2^3`, `x1*x2`, and `x1*x2^2` terms, where `x1` and `x2` are the first and second predictors, respectively. |
• A t-by-(p + 1) matrix, or a Terms Matrix, specifying terms in the model, where t is the number of terms and p is the number of predictor variables, and +1 accounts for the response variable. A terms matrix is convenient when the number of predictors is large and you want to generate the terms programmatically.

• A character vector or string scalar representing a Formula in the form

`'Y ~ terms'`,

where the `terms` are in Wilkinson Notation.

Example: `'Upper','quadratic'`

Data Types: `single` | `double` | `char` | `string`

Control for the display of information, specified as the comma-separated pair consisting of `'Verbose'` and one of these values:

• `0` — Suppress all display.

• `1` — Display the action taken at each step.

• `2` — Display the evaluation process and the action taken at each step.

Example: `'Verbose',2`

## Output Arguments


Linear regression model, returned as a `LinearModel` object.

To overwrite the input argument `mdl`, assign the new model to `mdl`.

`mdl = step(mdl);`

## More About


### Terms Matrix

A terms matrix `T` is a t-by-(p + 1) matrix specifying terms in a model, where t is the number of terms, p is the number of predictor variables, and +1 accounts for the response variable. The value of `T(i,j)` is the exponent of variable `j` in term `i`.

For example, suppose that an input includes three predictor variables `A`, `B`, and `C` and the response variable `Y` in the order `A`, `B`, `C`, and `Y`. Each row of `T` represents one term:

• `[0 0 0 0]` — Constant term or intercept

• `[0 1 0 0]` — `B`; equivalently, `A^0 * B^1 * C^0`

• `[1 0 1 0]` — `A*C`

• `[2 0 0 0]` — `A^2`

• `[0 1 2 0]` — `B*(C^2)`

The `0` at the end of each term represents the response variable. In general, a column vector of zeros in a terms matrix represents the position of the response variable. If you have the predictor and response variables in a matrix and column vector, then you must include `0` for the response variable in the last column of each row.
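The rows listed above can be collected into a terms matrix. For example, with predictors `A`, `B`, `C` and response `Y`:

```matlab
% Terms matrix for: intercept + B + A*C + A^2
% Columns correspond to A, B, C, and Y (the response).
T = [0 0 0 0;   % constant term
     0 1 0 0;   % B
     1 0 1 0;   % A*C
     2 0 0 0];  % A^2
% A matrix like this can be passed as the value of the
% 'Lower' or 'Upper' name-value pair argument of step.
```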

### Formula

A formula for model specification is a character vector or string scalar of the form `'Y ~ terms'`.

• `Y` is the response name.

• `terms` represents the predictor terms in a model using Wilkinson notation.

For example:

• `'Y ~ A + B + C'` specifies a three-variable linear model with intercept.

• `'Y ~ A + B + C - 1'` specifies a three-variable linear model without intercept. Note that formulas include a constant (intercept) term by default. To exclude a constant term from the model, you must include `-1` in the formula.

### Wilkinson Notation

Wilkinson notation describes the terms present in a model. The notation relates to the terms present in a model, not to the multipliers (coefficients) of those terms.

Wilkinson notation uses these symbols:

• `+` means include the next variable.

• `-` means do not include the next variable.

• `:` defines an interaction, which is a product of terms.

• `*` defines an interaction and all lower-order terms.

• `^` raises the predictor to a power, exactly as in `*` repeated, so `^` includes lower-order terms as well.

• `()` groups terms.

This table shows typical examples of Wilkinson notation.

| Wilkinson Notation | Term in Standard Notation |
| --- | --- |
| `1` | Constant (intercept) term |
| `A^k`, where `k` is a positive integer | `A`, `A^2`, ..., `A^k` |
| `A + B` | `A`, `B` |
| `A*B` | `A`, `B`, `A*B` |
| `A:B` | `A*B` only |
| `-B` | Do not include `B` |
| `A*B + C` | `A`, `B`, `C`, `A*B` |
| `A + B + C + A:B` | `A`, `B`, `C`, `A*B` |
| `A*B*C - A:B:C` | `A`, `B`, `C`, `A*B`, `A*C`, `B*C` |
| `A*(B + C)` | `A`, `B`, `C`, `A*B`, `A*C` |

Statistics and Machine Learning Toolbox™ notation always includes a constant term unless you explicitly remove the term using `-1`.

For more details, see Wilkinson Notation.
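For instance, the expansions in the table can be checked with `fitlm` (a sketch using the `carsmall` variables):

```matlab
% Sketch: 'Weight*Year' expands to 1 + Weight + Year + Weight:Year.
load carsmall
tbl = table(MPG,Weight,categorical(Model_Year), ...
    'VariableNames',{'MPG','Weight','Year'});
mdl = fitlm(tbl,'MPG ~ Weight*Year');
mdl.Formula   % shows the intercept, main effects, and interaction
```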

## Algorithms

• Stepwise regression is a systematic method for adding and removing terms from a linear or generalized linear model based on their statistical significance in explaining the response variable. The method begins with an initial model, specified using `modelspec`, and then compares the explanatory power of incrementally larger and smaller models.

The `step` function uses forward and backward stepwise regression to determine a final model. At each step, the function searches for terms to add to the model or remove from the model based on the value of the `'Criterion'` name-value pair argument.

The default value of `'Criterion'` for a linear regression model is `'sse'`. In this case, `stepwiselm` and `step` of `LinearModel` use the p-value of an F-statistic to test models with and without a potential term at each step. If a term is not currently in the model, the null hypothesis is that the term would have a zero coefficient if added to the model. If there is sufficient evidence to reject the null hypothesis, the function adds the term to the model. Conversely, if a term is currently in the model, the null hypothesis is that the term has a zero coefficient. If there is insufficient evidence to reject the null hypothesis, the function removes the term from the model.

Stepwise regression takes these steps when `'Criterion'` is `'sse'`:

1. Fit the initial model.

2. Examine a set of available terms not in the model. If any of the terms have p-values less than an entrance tolerance (that is, if it is unlikely a term would have a zero coefficient if added to the model), add the term with the smallest p-value and repeat this step; otherwise, go to step 3.

3. If any of the available terms in the model have p-values greater than an exit tolerance (that is, the hypothesis of a zero coefficient cannot be rejected), remove the term with the largest p-value and return to step 2; otherwise, end the process.

At any stage, the function will not add a higher-order term if the model does not also include all lower-order terms that are subsets of the higher-order term. For example, the function will not try to add the term `X1:X2^2` unless both `X1` and `X2^2` are already in the model. Similarly, the function will not remove lower-order terms that are subsets of higher-order terms that remain in the model. For example, the function will not try to remove `X1` or `X2^2` if `X1:X2^2` remains in the model.

The default value of `'Criterion'` for a generalized linear model is `'Deviance'`. `stepwiseglm` and `step` of `GeneralizedLinearModel` follow a similar procedure for adding or removing terms.

You can specify other criteria by using the `'Criterion'` name-value pair argument. For example, you can specify the change in the value of the Akaike information criterion, Bayesian information criterion, R-squared, or adjusted R-squared as the criterion to add or remove terms.

Depending on the terms included in the initial model, and the order in which the function adds and removes terms, the function might build different models from the same set of potential terms. The function terminates when no single step improves the model. However, a different initial model or a different sequence of steps does not guarantee a better fit. In this sense, stepwise models are locally optimal, but might not be globally optimal.
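The three numbered steps above can be summarized in a schematic loop. This is an illustration of the procedure, not the toolbox's actual implementation; `bestAddCandidate` and `worstKeptTerm` are hypothetical helpers that score candidate terms by the p-value of the F-test described above.

```matlab
% Schematic sketch of stepwise regression with 'Criterion','sse'.
mdl = fitlm(tbl,modelspec);                % 1. Fit the initial model.
improved = true;
while improved
    improved = false;
    [p,term] = bestAddCandidate(mdl);      % smallest p-value among addable terms
    if p < PEnter                          % 2. Below the entrance tolerance?
        mdl = addTerms(mdl,term);          %    Add the term and repeat step 2.
        improved = true;
        continue
    end
    [p,term] = worstKeptTerm(mdl);         % largest p-value among removable terms
    if p > PRemove                         % 3. Above the exit tolerance?
        mdl = removeTerms(mdl,term);       %    Remove the term, return to step 2.
        improved = true;
    end
end                                        % Terminate when no step improves the model.
```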

• `step` treats a categorical predictor as follows:

• A model with a categorical predictor that has L levels (categories) includes L – 1 indicator variables. The model uses the first category as a reference level, so it does not include the indicator variable for the reference level. If the data type of the categorical predictor is `categorical`, then you can check the order of categories by using `categories` and reorder the categories by using `reordercats` to customize the reference level.

• `step` treats the group of L – 1 indicator variables as a single variable. If you want to treat the indicator variables as distinct predictor variables, create indicator variables manually by using `dummyvar`. Then use the indicator variables, except the one corresponding to the reference level of the categorical variable, when you fit a model. For the categorical predictor `X`, if you specify all columns of `dummyvar(X)` and an intercept term as predictors, then the design matrix becomes rank deficient.

• Interaction terms between a continuous predictor and a categorical predictor with L levels consist of the element-wise product of the L – 1 indicator variables with the continuous predictor.

• Interaction terms between two categorical predictors with L and M levels consist of the (L – 1)*(M – 1) indicator variables to include all possible combinations of the two categorical predictor levels.

• You cannot specify higher-order terms for a categorical predictor because the square of an indicator is equal to itself.

Therefore, if `step` adds or removes a categorical predictor, the function actually adds or removes the group of indicator variables in one step. Similarly, if `step` adds or removes an interaction term with a categorical predictor, the function actually adds or removes the group of interaction terms including the categorical predictor.
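For example, to make the fitting functions use a different reference level for a categorical predictor, reorder its categories before fitting (a sketch using `carsmall`):

```matlab
% Sketch: change the reference level of Year before fitting.
load carsmall
Year = categorical(Model_Year);
categories(Year)                    % first category ('70') is the reference level
Year = reordercats(Year,{'82','70','76'});   % make '82' the reference level
tbl = table(MPG,Weight,Year);
mdl = fitlm(tbl,'MPG ~ Weight + Year');      % indicators now relative to '82'
```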
