Unexpected Bayesian Regularization performance
I'm training a network to learn the sin function from 400 noisy samples.
If I use a 1-30-1 feedforwardnet with 'trainlm' the network generalises well. If I use a 1-200-1 feedforwardnet the network overfits the training data, as expected. My understanding was that 'trainbr' on a network with too many neurons will not overfit. However if I run trainbr on a 1-200-1 network until convergence (Mu reaches maximum), the given network seems to overfit the data despite a strong reduction in "Effective # Param".
This behaviour seems strange to me. Have I misunderstood Bayesian regularization? Can someone provide an explanation?
I can post my code if necessary, however first I want to know if the following is correct:
'trainbr' will not overfit with large networks if run to convergence.
Thanks
2 comments
Greg Heath
22 Aug 2020
Edited: Greg Heath, 22 Aug 2020
How many periods are covered by the 400 samples?
What is the minimum number of samples per period that is necessary?
Greg
Answers (1)
Shubham Rawat
28 Aug 2020
Hi Jonathan,
Given your dataset size and the number of neurons, it is quite possible that your model is overfitting.
I reproduced your code with 20 neurons and the "trainbr" training function, and it gives the results attached here, with Effective # Param = 18.6.
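For reference, a minimal sketch of the kind of setup described in the question (the data generation, noise level, and number of periods here are assumptions, since the original code was not posted):

```matlab
% 400 noisy samples of sin over roughly two periods (assumed setup).
x = linspace(-2*pi, 2*pi, 400);
t = sin(x) + 0.1*randn(size(x));      % targets with Gaussian noise

% Oversized network trained with Bayesian regularization.
net = feedforwardnet(200, 'trainbr');
net.trainParam.epochs = 1000;         % run toward convergence (Mu reaches mu_max)
[net, tr] = train(net, x, t);

% Compare the fit against the noisy data; with 'trainbr',
% check tr and the training window for the "Effective # Param" value.
y = net(x);
plot(x, t, '.', x, y, '-');
```

Note that 'trainbr' disables the validation stop by default, so the only guards against overfitting are the regularization itself and having enough samples per period of the target function (as Greg's comment asks about).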
0 comments