How to fix the number of iterations when training a neural network

Thirunavukkarasu on 16 Sep 2014
Commented: Greg Heath on 22 Dec 2017
When I train my neural network with the Levenberg-Marquardt algorithm, it runs a different number of iterations each time. How do I fix my neural network to a constant number of iterations?
3 Comments
Thirunavukkarasu on 19 Sep 2014
epochs
Shreeja Shetty on 20 Jul 2017
I am currently facing a similar issue to the one mentioned above. Can someone please provide a solution?


Accepted Answer

Greg Heath
Greg Heath on 19 Sep 2014
When you train your net again, the random number generator is in a different state. Therefore you will have a different trn/val/tst split AND a different set of initial weights. The training will stop according to one of several stopping rules including
1. performance goal achieved
2. maximum epochs reached
3. minimum gradient achieved
4. maximum mu reached
5. validation stop (validation error fails to improve for max_fail consecutive epochs, i.e., it has passed a local minimum)
[net, tr, y, e] = train(net, x, t); % e = t - y
stopcriterion = tr.stop % the stopping rule that ended training
or, if you are training in a double for loop
stopcriteria{i,j} = tr.stop; % record why run (i,j) stopped
This is great because all are chosen to optimize your performance. That is why every time I try a new candidate for H = number of hidden nodes, I design at least Ntrials = 10 nets. So, if I am considering 10 different values for H, I will have 100 designs, which I summarize in three 10-by-10 matrices for training, validation and test performance.
The best net is determined from the nontraining validation set performance (smaller values of H are preferred) and an unbiased estimate of unseen nontraining data performance is obtained from the test set performance.
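A minimal sketch of that double loop (fitnet and the names Hvec, Ntrials, perfval, etc. are illustrative; x and t are assumed to be already loaded):
rng(0) % fix the RNG state so the whole experiment is repeatable
Hvec = 1:10; % candidate numbers of hidden nodes
Ntrials = 10; % nets designed per candidate
perftrn = zeros(numel(Hvec), Ntrials); % training MSE
perfval = zeros(numel(Hvec), Ntrials); % validation MSE
perftst = zeros(numel(Hvec), Ntrials); % test MSE
stopcriteria = cell(numel(Hvec), Ntrials);
for i = 1:numel(Hvec)
    for j = 1:Ntrials
        net = fitnet(Hvec(i)); % new random initial weights and data split
        [net, tr] = train(net, x, t); % trainlm is the default
        perftrn(i,j) = tr.best_perf;
        perfval(i,j) = tr.best_vperf; % used to select the best net
        perftst(i,j) = tr.best_tperf; % unbiased estimate on unseen data
        stopcriteria{i,j} = tr.stop; % which stopping rule fired
    end
end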
Hope this helps.
Thank you for officially choosing my answer
Greg

More Answers (2)

Greg Heath
Greg Heath on 17 Sep 2014
If you train multiple nets in a loop, you can duplicate previous runs by keeping track of the state of the random number generator. That is why I always specify an initial random number state before the outer loop. For examples, search on
greg rng(0)
or
greg rng('default')
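A minimal sketch (fitnet(10) is just an illustrative net; any fixed seed works):
rng(0) % or rng('default'); fixes the data split and the initial weights
net = fitnet(10);
[net, tr] = train(net, x, t);
tr.num_epochs % identical on every run that starts from this RNG state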
Hope this helps.
Thank you for formally accepting my answer
Greg
5 Comments
Parul Singh on 26 Apr 2017
rng('default') - the net gives different outputs each time it is run.
rng(some fixed number) - the number of iterations stays the same, at 1000.
We want to vary the number of iterations to achieve the best output, and then, for a constant number of iterations, we want the network to give the same output each time it is run.
Please help.
Greg Heath on 20 Jul 2017
Given what I have learned in 37 years of NN design, what you want to do is illogical. Please reread what I have written.
Greg



Cesare Trematore
Cesare Trematore on 19 Dec 2017
I do not know if I fully agree. I was running a pattern recognition neural network with the trainbr option. The training performance kept improving up to 1000 epochs, but after about 200 epochs the test performance started worsening. In such cases it would be useful to have the option to stop the training after a fixed number of epochs.
1 Comment
Greg Heath
Greg Heath on 22 Dec 2017
That option is available.
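For example, the maximum number of epochs is an ordinary training parameter (a minimal sketch, assuming net, x and t already exist):
net.trainParam.epochs = 200; % stop after at most 200 epochs
[net, tr] = train(net, x, t);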
However, why in the world are you using trainbr for pattern recognition?
What happens when you use patternnet with all defaults except the number of hidden nodes and the initial RNG state?
Search the NEWSGROUP and ANSWERS with
greg patternnet
Hope this helps.
Greg

