Optimal hidden nodes number
Hello everybody,
To determine the optimal number of hidden neurons, a trial-and-error search has been used (trials = 10, 10 < H < 100, dH = 100). I get the table above, but I cannot determine the optimal number of hidden neurons from it. The table columns are (Trial, Hidden neurons, test_mse, train_mse, val_mse, test_R, train_R, val_R).
Please, I need your help. Thank you.
Accepted Answer
Greg Heath
25 January 2018
I have posted hundreds of examples in both the NEWSGROUP (comp.soft-sys.matlab) and ANSWERS that determine the optimal number of hidden nodes defined by
1. One Hidden Layer (ALWAYS SUFFICIENT!!!)
2. Minimum number of hidden nodes subject to my practicality constraint
   TRAINING SUBSET RSQUARE >= 0.99
   i.e., 99% of the training subset target variance is successfully modeled by the net.
   Equivalently,
   TRAINING SUBSET MSE <= 0.01 * TRAINING SUBSET VARIANCE
   (a search loop implementing this constraint is sketched after this list)
3. COMMENTS & CAVEATS
a. The training subset must be a good representative of the validation and test data.
b. A smaller number of hidden nodes can often be obtained by using multiple hidden layers.
c. The MSE minimization technique used for regression and curve fitting (e.g., via FITNET) is also successful for classification and pattern recognition (e.g., via PATTERNNET), where the minimized quantity is cross-entropy and the desired result is a minimal error rate.
4. Suggested NEWSGROUP and ANSWERS search words for either FITNET or PATTERNNET
greg fitnet/patternnet msegoal nmse
5. The method is also used for time series.
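For concreteness, here is a minimal sketch of the hidden-node search implied by item 2, assuming x (I x N inputs) and t (O x N targets) are already in the workspace and that FITNET's default data division is used. The grid Hmin:dH:Hmax, Ntrials, and the variable names are illustrative choices, not the exact scripts from the referenced posts.

% Search for the smallest H whose training-subset Rsquare meets the 0.99 goal
[ I, N ] = size(x);                      % inputs are I x N
[ O, ~ ] = size(t);                      % targets are O x N
Hmin = 10; dH = 10; Hmax = 100;          % illustrative candidate grid
Ntrials = 10;                            % random-initialization repeats per H
bestH = NaN; bestNMSE = Inf;
rng('default')                           % reproducible splits and initial weights
for H = Hmin:dH:Hmax
    for trial = 1:Ntrials
        net = fitnet(H);                 % one hidden layer
        [ net, tr ] = train(net, x, t);
        y = net(x);
        ttrn = t(:, tr.trainInd);  ytrn = y(:, tr.trainInd);
        MSEtrn   = mean(mean((ttrn - ytrn).^2));
        MSEtrn00 = mean(var(ttrn', 1));  % constant-output reference MSE of the training subset
        NMSEtrn  = MSEtrn / MSEtrn00;
        R2trn    = 1 - NMSEtrn;
        if R2trn >= 0.99 && ( isnan(bestH) || H < bestH )
            bestH = H;  bestNMSE = NMSEtrn;   % smallest H meeting the constraint
        end
    end
end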
Hope this helps.
Thank you for formally accepting this answer
Greg
More Answers (2)
Greg Heath
28 January 2018
BASIC MATLAB NN DESIGN ASSUMPTIONS
The summary statistics of the Training, Validation and Test subsets are satisfactorily similar.
Training data is used to estimate net parameters
Validation data is used to verify ability to generalize (i.e., ability to obtain satisfactory performance on nontraining data)
Test data is used to obtain unbiased estimates of performance on non-design (including unseen) data
Overfitting occurs when the number of training parameters to be estimated exceeds the number of training equations
Overtraining occurs when training continues past the point at which the trend of the nontraining (validation) error stops decreasing.
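The overfitting condition above can be made concrete by counting training equations against unknown weights. The count below is a sketch for the usual single-hidden-layer I-H-O topology with biases; Ntrn, I, H, and O are assumed to be supplied by the user.

Neq = Ntrn * O;                          % number of training equations
Nw  = ( I + 1 ) * H + ( H + 1 ) * O;     % number of weights and biases to estimate
if Nw > Neq
    disp('More unknowns than training equations: overfitting is possible.')
end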
Normalized mean squared error (NMSE) and Rsquare (Rsquare = 1 - NMSE) tend to be sufficient for characterizing nonclassifier performance.
The normalization denominator in NMSE = MSE/MSEref is the minimum MSE of a constant-output model. The minimizing constant output and the corresponding MSEref are
y = mean(t,2)
MSEref = mse(t - mean(t,2)) = mean(var(t',1))
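In runnable form, a sketch assuming targets t (O x N) and network outputs y of the same size:

MSE    = mean(mean((t - y).^2));         % mean squared error of the net output
MSEref = mean(var(t', 1));               % i.e., mse(t - mean(t,2)), the constant-output reference
NMSE   = MSE / MSEref;                   % normalized mean squared error
Rsq    = 1 - NMSE;                       % Rsquare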
Cross-entropy is the default minimization quantity for MATLAB classifiers such as PATTERNNET. However, the ultimate minimization goal is the classification error rate.
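A corresponding sketch for a classifier, assuming inputs x and one-hot target columns t exist; the hidden layer size of 10 is illustrative:

net = patternnet(10);                    % default performFcn is 'crossentropy'
[ net, tr ] = train(net, x, t);
y = net(x);
xent    = perform(net, t, y);            % the cross-entropy that TRAIN minimizes
errRate = mean( vec2ind(y) ~= vec2ind(t) );   % the ultimate goal: classification error rate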
Hope this helps.
Thank you for formally accepting my answer
Greg