I developed a NARX network to model a UASB reactor and predicted three different output parameters over 11 timesteps with a one-step-ahead approach. While some of the predictions are well within range, others show unacceptable differences between target and output. I tried different combinations of hidden layer sizes and delay sizes, but the results did not improve. Should I incorporate something else into the code to improve the training of the neural network, improve the results with a filter (e.g., Kalman), or use a different model (neuro-fuzzy or hybrid) altogether? The configuration of the network is 5-12-12-3. The training dataset consists of data at 100 timesteps.

Accepted Answer

Greg Heath on 31 Jul 2014

1 vote

One hidden layer is sufficient
net = narxnet(ID,FD,H)
For details, search using
greg narxnet
and
greg narx
Use the significant lags of the target autocorrelation function and the target/input crosscorrelation function to determine ID and FD.
Determine the upper bound on the number of hidden nodes, Hub, that guarantees that the number of training equations, Ntrneq, exceeds the number of unknown weights, Nw.
Use 'divideblock' or 'divideind' to preserve the spacing between data points.
For fixed ID,FD find the minimum value for H that will yield satisfactory performance. If H << Hub is not satisfied, use a validation set or regularization (msereg, trainbr) to prevent overtraining an overfit net.
Initialize the RNG so that designs can be duplicated.
Normalize the MSE by the average target variance, MSE00 = mean(var(t',1)), to obtain a scale-free performance measure, NMSE.
Use the training record tr to divide performance into trn/val/tst components.
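The recipe above can be sketched as follows (a minimal sketch, not Greg's exact code; it assumes x and t are cell-array time series as produced by tonndata, and ID, FD, and H are placeholders to be chosen from the correlation analysis):

```matlab
ID = 1:2;  FD = 1:2;  H = 10;          % placeholder lags and hidden size
net = narxnet(ID, FD, H);              % single hidden layer
net.divideFcn = 'divideblock';         % preserves spacing between points
rng(0)                                 % reproducible weight initialization
[Xs, Xi, Ai, Ts] = preparets(net, x, {}, t);
[net, tr] = train(net, Xs, Ts, Xi, Ai);
Ys    = net(Xs, Xi, Ai);
MSE   = perform(net, Ts, Ys);
MSE00 = mean(var(cell2mat(Ts)', 1));   % average target variance
NMSE  = MSE/MSE00                      % scale-free performance measure
% tr.trainMask, tr.valMask, tr.testMask divide performance into
% trn/val/tst components via the training record tr.
```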
Hope this helps.
Thank you for formally accepting my answer
Greg

7 Comments

Atiyo Banerjee on 31 Jul 2014
Sir, reducing the number of hidden layers, changing the performance function, and the data division for training, validation, and testing won't be a problem, and I am using the RNG to exactly duplicate network designs. Please elaborate on how I can find ID and FD using your approach.
Greg Heath on 1 Aug 2014
No need to change the performance function for training. Just normalize for presentation and explanation of results.
I have posted many examples. Search the NEWSGROUP and ANSWERS using
greg narxnet nncorr
and/or
greg narxnet fft
Greg
Atiyo Banerjee on 29 Sep 2014
Sir, I tried to improve the performance of the network by using a single hidden layer instead of two. Also, a delay value of 2 yields the highest peaks in the correlation curve. But the predictions are not satisfactory, and training is stopping very early in the process. What should I do to improve the predictions?
Greg Heath on 29 Sep 2014
You want to choose some or all of the significant peaks in both the target/target autocorrelation and the target/input crosscorrelation. Input and feedback lags do not have to be identical. The threshold for a 95% significance level can be estimated from 100 or more repetitions of the noise/noise cross-correlation function.
greg narxnet nncorr % search words
Is this what you did?
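The noise-based threshold Greg describes can be sketched like this (an illustrative sketch only, assuming t is a single 1-by-N target row and nncorr is the toolbox-era correlation function; argument conventions may differ between releases):

```matlab
N  = 100;
zt = zscore(t);                              % standardized target series
autocorrt = nncorr(zt, zt, N-1, 'biased');   % target autocorrelation
% Estimate the 95% significance level from 100 noise/noise correlations:
rng(0)
thresh95 = zeros(1,100);
for k = 1:100
    zn        = zscore(randn(1,N));
    noisecorr = nncorr(zn, zn, N-1, 'biased');
    sortedabs = sort(abs(noisecorr(N+1:end)));  % positive lags only
    thresh95(k) = sortedabs(ceil(0.95*(N-1)));
end
sigthresh = mean(thresh95);
siglags   = find(abs(autocorrt(N+1:end)) >= sigthresh)  % candidate FD lags
```

The same procedure applied to the target/input cross-correlation yields candidate ID lags.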
I = 5, O = 3, Ntrn = 0.7*N = 70, Ntrneq = Ntrn*O = 210
Hub = -1 + ceil((Ntrneq-O)/(I+O+1)) = -1 + ceil(207/9) = 22
H = Hmin:dH:Hmax = ?
Ntrials = 10? %How many designs for each value of H?
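Spelled out numerically (a sketch using the values in this thread; for a single hidden layer the weight count is Nw = (I+1)*H + (H+1)*O, ignoring the extra delay taps):

```matlab
I = 5;  O = 3;  N = 100;
Ntrn   = 0.7*N;                   % 70 training points
Ntrneq = Ntrn*O;                  % 210 training equations
% Require Ntrneq >= Nw = (I+1)*H + (H+1)*O and solve for H:
Hub = -1 + ceil((Ntrneq - O)/(I + O + 1))   % = 22
```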
What is causing the premature stopping?
tr.stop = ?
Would help if you posted your code.
Greg
Image Analyst on 2 Oct 2014
Edited: Image Analyst, 2 Oct 2014
Atiyo's "Answer" moved here since it's a comment to Greg and not an "Answer" to the original question.
Yes I did that exactly, and the training is stopping at the first epoch for most of the designs.
This is the code:
% Solve an Autoregression Problem with External Input with a NARX Neural Network
% Script generated by NTSTOOL
% Created Wed Oct 02 22:26:03 CEST 2013
%
% This script assumes these variables are defined:
u= xlsread('inputs111.xls');
y= xlsread('outputs111.xls');
x = tonndata(u,true,false);
t = tonndata(y,true,false);
% Create a Nonlinear Autoregressive Network with External Input
inputSeries= x(1:100);
xn= x(101:end);
targetSeries = t(1:100);
inputDelays = 1:5;
feedbackDelays = 1:5;
hiddenLayerSize = 6;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
net.divideFcn = 'divideblock';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
net.performFcn = 'mse';
rng(1600)
Ntrials=35;
for i = 1:Ntrials
state(i) = rng
net=init(net);
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
tr=tr;
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(outputs,targets);
errors2= cell2mat(errors);
errors3 = (abs(errors2)./cell2mat(targets))*100;  % elementwise percentage error
performance = perform(net,targets,outputs)
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs);
valPerformance = perform(net,valTargets,outputs);
testPerformance = perform(net,testTargets,outputs);
training_error=tr.best_perf
validation_error=tr.best_vperf
test_error=tr.best_tperf
r=regression(targets,outputs)
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
[xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
ys = nets(xs,xis,ais);
earlyPredictPerformance = perform(nets,ts,ys);
errors1 = gsubtract(ys(96), t(101));
%the 557th output is actually the 561st day's output because of the
%561-557 = 4 step delay
errors4=cell2mat(errors1);
%cell to matrix change
errors5 = (abs(errors4)./cell2mat(t(101)))*100;
%divide by output value of that day to get percentage error
end
for i=100:110
inputSeries = x(1:i);
xnh=x(i+1:end);
targetSeries = t(1:i);
toPredict = t(i+1);
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(outputs, targets);
errors2=cell2mat(errors);
errors3 = (abs(errors2)./cell2mat(targets))*100;  % elementwise percentage error
%performance = perform(net,targets,outputs)
nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
%view(nets);
[xs,xis,ais,ts] = preparets(nets,inputSeries,{},targetSeries);
ys = nets(xs,xis,ais);
earlyPredictPerformance = perform(nets,ts,ys);
errors1 = gsubtract(ys(i-4), t(i+1));
errors4=cell2mat(errors1);
i+1
errors = (abs(errors4)./cell2mat(t(i+1)))*100
end
as= ys(96:106);
at= cell2mat(as);
% Train the Network
Greg Heath on 4 Oct 2014
Edited: Greg Heath, 4 Oct 2014
Your response "Yes I did that exactly" is curious, because I do not see any of that above. Most importantly, you are complaining about premature termination, yet you did not answer my very relevant question
tr.stop = ?
Why would you use the command tr=tr with a semicolon???
Also, you originally mentioned I = 5 and O = 3. However, your code is SISO.
What is N?
I think you need to find the MATLAB data that can most demonstrate your problem.
help nndata
Then we can compare results.
Greg
Greg Heath on 4 Oct 2014
What is the result of the command "whos" in the following
u= xlsread('inputs111.xls');
y= xlsread('outputs111.xls');
x = tonndata(u,true,false);
t = tonndata(y,true,false);
whos
Please apply your code to the pollution_data set. Although
size(input) = [ 8 508 ]
size(target) = [ 3 508 ]
If you wish, you can only use I = 5 to match your data.
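Loading that benchmark looks like this (a sketch; pollution_dataset ships with the toolbox, and X5 is a hypothetical name for the trimmed input):

```matlab
[X, T] = pollution_dataset;   % cell arrays: 8x1 inputs, 3x1 targets, 508 steps
whos X T
% To match I = 5, keep only the first five input elements of each cell:
X5 = cellfun(@(c) c(1:5), X, 'UniformOutput', false);
```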
Greg
