How can I predict a future value from a time series using the Neural Network Toolbox 6.0.3 (R2009b)?

12 views (last 30 days)
I am trying to use MATLAB and the Neural Network Toolbox to predict a future value from a time series. I am using a recurrent (Elman) network but cannot get it working; I think I misinterpreted the example and am not providing the right input data.
For example, I am trying to predict the next value in a series:
x1 x2 x3 x4 x5 -> x6
x2 x3 x4 x5 x6 -> x7
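A rough sketch of that training layout in MATLAB (illustrative data and window length only):
% Build sliding-window inputs P and next-value targets T from a series x
x = [3 9 12 4 10 13 5 11 14 6];   % illustrative data
win = 5;                          % window length (x1..x5 -> x6)
nSamples = numel(x) - win;
P = zeros(win,nSamples);          % each column is one input window
T = zeros(1,nSamples);            % target is the value after each window
for k = 1:nSamples
    P(:,k) = x(k:k+win-1)';
    T(k) = x(k+win);
end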

Accepted Answer

MathWorks Support Team on 8 Nov 2011
If you happen to know that "x(T)" is predictable, given:
x(T-N) ... x(T-1)
then the best way to solve the problem is simply to provide those values to a feed-forward network, as you have done.
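For comparison, a minimal feed-forward sketch of that approach might look like the following (the data, layer size, and use of the R2009b-era newff interface are illustrative assumptions):
% Minimal feed-forward sketch (hypothetical data and layer size)
% Each column of P is one window x(k)..x(k+4); T holds the value that follows it
x = [3 9 12 4 10 13 5 11 14 6];
P = [x(1:5)' x(2:6)' x(3:7)' x(4:8)' x(5:9)'];
T = x(6:10);
% Create a feed-forward network with one hidden layer of 10 neurons
net = newff(P,T,10);
% newff defaults to Levenberg-Marquardt (trainlm) training
net = train(net,P,T);
% Network output for each training window
Y = sim(net,P);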
We recommend the Elman network for problems where an unknown number of past data points are needed to properly predict the next data point. The advantage of the Elman network is that it can use its recurrent hidden layer to store useful information about data points far in the past.
There are a few drawbacks with this method. First, the job an Elman network has to perform is more difficult than that of a feed-forward network. The feed-forward network is presumably given enough information to make a good prediction. The Elman network not only has to make the prediction, but has to figure out what information is necessary to keep about past data points in order to make a prediction. Thus, more neurons are needed in the hidden layer.
Second, the internal state produced by the hidden layer depends largely on its initial weights and biases. Ensuring there is enough variation in the weights and biases to form a sufficiently precise representation of past inputs also requires allocating even more neurons to the hidden layer.
There are also a couple of practical reasons why feed-forward networks will give you a better result with the current version of the Neural Network Toolbox. You can now train feed-forward networks with the Levenberg-Marquardt algorithm, saving a lot of time. The Elman network, by contrast, can currently be trained on only a single time series.
Here is the code if you do decide that an Elman network is appropriate for your problem:
% Target Vector
T = [3 9 12 4 10 13 5];
% Input Vector: the target delayed by one time step (with a leading zero as
% the initial "past" value); toeplitz puts this shifted copy in its second row
delaymat = toeplitz([3 0],T);
P = delaymat(2,:);
% Number of hidden neurons
S1 = 20;
% Create Network
net = newelm(P,T,S1);
% Convert inputs and targets to sequence (cell array) form and train the network
Pseq = con2seq(P);
Tseq = con2seq(T);
net = train(net,Pseq,Tseq);
% Test the network on the training sequence
Y = sim(net,Pseq);
z = seq2con(Y);      % convert the output sequence back to concurrent form
diff1 = T - z{1,1};  % error at each time step
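As a hypothetical follow-up (not part of the original example), the trained network can be asked for the value after the last target by extending the one-step-delayed input sequence by one element:
% Forecast the step after the last target: the newest observed value, T(end),
% becomes the next delayed input
Pnext = con2seq([P T(end)]);
Ynext = sim(net,Pnext);
forecast = Ynext{end};   % estimate of the next value in the series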
One way around the single-series limitation is to append separate time series to each other. If the series are long, the impossibility of predicting across the join (between the end of one series and the beginning of the next) should not harm learning any more than ordinary noise would. The network will learn the predictable parts and simply settle for high errors where the series are joined, which satisfies the original goal.
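A hypothetical sketch of that workaround, joining two illustrative series end to end before building the delayed input as above:
% Append a second (illustrative) series to the first, then proceed as before
T1 = [3 9 12 4 10 13 5];
T2 = [2 8 11 3 9 12 4];
T = [T1 T2];                      % joined target series
delaymat = toeplitz([T(1) 0],T);
P = delaymat(2,:);                % delayed input; errors will stay high at the join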

More Answers (1)

Tish Sheridan on 10 Apr 2018
If you'd like to try a deep learning solution, take a look at this example: https://uk.mathworks.com/help/nnet/examples/time-series-forecasting-using-deep-learning.html

Release

R2009b
