I need a starting point for choosing "spread" when using newrb()

Shadan on 24 Apr 2014
Commented: Shadan on 29 Apr 2014
My data set consists of 62 inputs and one output, and I want to do function approximation. I understand that the optimum "spread" value is usually determined by trial and error. However, I was wondering if there is any way of approximating this value (just to get a sense of its magnitude)? My second question is regarding the minimum number of training samples required when using newrb. Is it just like feedforward neural networks, the more the better?
Thank you for your support

Accepted Answer

Greg Heath on 28 Apr 2014
Edited: Greg Heath on 28 Apr 2014
If you standardize the inputs (zscore or mapstd), the unity default spread is a good starting place.
The best generalization performance comes from using as few hidden neurons as possible.
Search the neural net literature (e.g., comp.ai.neural-nets FAQ) using the terms
overfitting
overtraining
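
A minimal sketch of that starting point: standardize the inputs, then call newrb with the default spread of 1 and a small cap on hidden neurons. The variable names (X, T), the data file, and the specific values for MN and DF are placeholders for illustration, not from the original post.

% X: 62-by-N matrix of inputs (one column per sample)
% T: 1-by-N  matrix of targets
load mydata.mat        % hypothetical file providing X and T

Xs = mapstd(X);        % standardize each input row to zero mean, unit variance
                       % (zscore(X, 0, 2) does the same if the Statistics
                       %  and Machine Learning Toolbox is available)

goal   = 0;            % mean squared error goal
spread = 1;            % unity spread -- reasonable start for standardized inputs
MN     = 20;           % cap on hidden neurons; keep this small and raise it
                       % only if the error goal is not met
DF     = 5;            % display progress every DF neurons

net = newrb(Xs, T, goal, spread, MN, DF);

Y = net(Xs);           % check the fit on the training data

From there, vary spread over a coarse grid (e.g., powers of 2 around 1) and compare validation error, keeping the smallest network that generalizes well.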

More Answers (0)
