# linearlayer

Linear layer

## Syntax

```
linearlayer(inputDelays,widrowHoffLR)
```

## Description

Linear layers are single layers of linear neurons. They may be static, with input delays of 0, or dynamic, with input delays greater than 0. They can be trained on simple linear time series problems, but they are often used adaptively, continuing to learn after deployment so they can track changes in the relationship between inputs and outputs.
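Adaptive use means updating the layer incrementally with `adapt` rather than batch-training it with `train`. A minimal sketch (the data here is illustrative, chosen so the target is twice the previous input):

```
% Illustrative data: each target is 2 times the previous input
x = {0 1 -1 0 1 1 -1 0};
t = {0 0 2 -2 0 2 2 -2};

net = linearlayer(1:2,0.01);        % delays 1:2, Widrow-Hoff rate 0.01
[Xs,Xi,Ai,Ts] = preparets(net,x,t);

% adapt updates the weights once per time step instead of in batch,
% so the layer keeps learning as new samples arrive
[net,Y,E] = adapt(net,Xs,Ts,Xi,Ai);
```

Calling `adapt` repeatedly on new incoming data lets the layer follow a drifting input-output relationship.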

If a network is needed to solve a nonlinear time series relationship, then better networks to try include `timedelaynet`, `narxnet`, and `narnet`.

`linearlayer(inputDelays,widrowHoffLR)` takes these arguments,

* `inputDelays`: Row vector of increasing 0 or positive delays (default = 0)
* `widrowHoffLR`: Widrow-Hoff learning rate (default = 0.01)

and returns a linear layer.

If the learning rate is too small, learning will happen very slowly. The greater danger, however, is a learning rate that is too large: learning becomes unstable, weight vectors change wildly, and errors increase instead of decrease. If a data set is available that characterizes the relationship the layer is to learn, the maximum stable learning rate can be calculated with `maxlinlr`.
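Given representative input vectors, `maxlinlr` returns that maximum stable rate, which can then be passed to `linearlayer`. A short sketch (the input matrix here is illustrative):

```
% Four 2-element input vectors characterizing the problem
P = [1 -1 2 0; 0 2 -1 1];

% Maximum stable learning rate for a linear layer with a bias
lr = maxlinlr(P,'bias');

net = linearlayer(0,lr);   % static linear layer using that rate
```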

## Examples

### Create and Train a Linear Layer

Here a linear layer is trained on a simple time series problem.

```
x = {0 -1 1 1 0 -1 1 0 0 1};
t = {0 -1 0 2 1 -1 0 1 0 1};
net = linearlayer(1:2,0.01);          % delays 1:2, learning rate 0.01
[Xs,Xi,Ai,Ts] = preparets(net,x,t);   % shift data to fill the input delays
net = train(net,Xs,Ts,Xi,Ai);
view(net)
Y = net(Xs,Xi);
perf = perform(net,Ts,Y)
```

```
perf =
    0.2396
```