
problem with neural network training

2 views (last 30 days)
Mohamad on 15 Dec 2013
Edited: KAE on 15 May 2019
I have read in some references that adding small, different random noise to the neural network input data at each epoch of training (jitter) improves the generalization of the net. I would like to implement this, but since I do not know the number of epochs beforehand, I would have to check the convergence of my net after every epoch, which makes the problem too complicated. Do you have any suggestions for solving this problem? Best
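
A minimal sketch of the per-epoch jitter idea being described, assuming fitnet, a made-up noise level, and a placeholder convergence threshold and epoch cap (none of these values come from the question):

[x, t] = simplefit_dataset;            % example dataset shipped with the toolbox
net = fitnet(10);                      % 10 hidden nodes, arbitrary choice
net.divideFcn = 'dividetrain';         % let the loop below do its own stopping
net.trainParam.epochs = 1;             % one epoch per call to train
net.trainParam.showWindow = false;

noiseStd  = 0.05;                      % assumed jitter level
maxEpochs = 200;                       % upper bound instead of a known epoch count
for epoch = 1:maxEpochs
    xNoisy = x + noiseStd*randn(size(x));   % fresh noise every epoch
    net = train(net, xNoisy, t);            % continue training from current weights
    if perform(net, t, net(x)) < 1e-4       % simple convergence check on clean data
        break
    end
end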

Accepted Answer

Greg Heath on 17 Dec 2013
Adding noise after each epoch does not sound like a very productive method.
Given the number of hidden nodes, design many nets in a double loop. The outer loop varies the level of noise added to the training data. The inner loop is used to design Ntrials nets with different random initial weights.
One or more good designs can be obtained from the numlevel*Ntrials candidates using the validation set error as a measure of performance. Final unbiased estimates of generalization performance are then obtained from the test set performances.
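
A hedged sketch of this double-loop search, assuming fitnet, a hand-picked set of noise levels, and Ntrials = 10 (these values are illustrative, not from the answer). Noise is added only to the training subset so the validation and test sets stay clean:

[x, t] = simplefit_dataset;
hiddenNodes = 10;
noiseLevels = [0 0.01 0.05 0.1];       % outer loop: noise std added to training inputs
Ntrials     = 10;                      % inner loop: random weight initializations

% Fixed train/validation/test split so every candidate is compared fairly
N = size(x, 2);
idx = randperm(N);
trainInd = idx(1:round(0.70*N));
valInd   = idx(round(0.70*N)+1:round(0.85*N));
testInd  = idx(round(0.85*N)+1:end);

bestValErr = Inf;
for noiseStd = noiseLevels
    xNoisy = x;
    xNoisy(:, trainInd) = x(:, trainInd) + noiseStd*randn(size(x(:, trainInd)));
    for trial = 1:Ntrials
        net = fitnet(hiddenNodes);             % new random initial weights each trial
        net.divideFcn = 'divideind';
        net.divideParam.trainInd = trainInd;
        net.divideParam.valInd   = valInd;
        net.divideParam.testInd  = testInd;
        net.trainParam.showWindow = false;
        [net, tr] = train(net, xNoisy, t);
        if tr.best_vperf < bestValErr          % pick the design by validation error
            bestValErr = tr.best_vperf;
            bestNet = net;
            bestTR  = tr;
        end
    end
end
testErr = bestTR.best_tperf;                   % unbiased generalization estimate
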
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 Comment
KAE on 15 May 2019
Edited: KAE on 15 May 2019
I have seen adding noise to the inputs listed as a technique for data 'augmentation'. So I wanted to confirm: I will still have the same number N of inputs, just with a given level of noise added to them in each loop, correct? Rather than, for example, 2N inputs that concatenate the data with and without the noise?
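
A quick illustration of the two alternatives being distinguished here (the sizes and noise level are placeholders, not values from the thread):

x = rand(3, 100);                      % N = 100 original input vectors
noiseStd = 0.05;

xJittered  = x + noiseStd*randn(size(x));         % same N columns, noise added in place
xAugmented = [x, x + noiseStd*randn(size(x))];    % 2N columns: clean and noisy concatenated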


