Optimization of dimensions of hidden layer in neural network
Hello all
I want to optimize the number of neurons in the 3 hidden layers of my neural network. Is there any way (apart from applying 3 nested for loops and checking the test performance for each combination) to find the optimal dimensions of all three layers?
My input matrix is 208x200 and my target matrix is 5x200 (i.e., 200 samples).
Please help me!
0 Comments
Accepted Answer
Greg Heath
31 May 2014
Edited: Greg Heath on 31 May 2014
There is no a priori way to optimize the number of hidden neurons for 1 hidden layer, much less 3. However, you can get a good estimate of the minimum number for a single layer via trial and error. Increasing the number of hidden layers tends to reduce the total number of hidden neurons needed. So a good first step would be to design a single-hidden-layer model.
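The trial-and-error search can be sketched as a loop over candidate hidden sizes, scoring each on held-out data and keeping the smallest size that performs well. The sketch below is Python/NumPy and should be read as pseudocode for the procedure (the thread itself concerns MATLAB's toolbox, where you would call `fitnet`/`train` instead); to keep it self-contained it uses synthetic data and trains each candidate with a random-hidden-weights least-squares shortcut rather than backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data shaped like the question: 208 inputs, 5 targets, 200 samples.
X = rng.standard_normal((200, 208))
T = X[:, :5] @ rng.standard_normal((5, 5)) + 0.1 * rng.standard_normal((200, 5))

# Simple train/validation split (the default division in the thread gives 140
# training samples out of 200).
Xtr, Xval = X[:140], X[140:]
Ttr, Tval = T[:140], T[140:]

def val_mse(h):
    """Train a 1-hidden-layer net with h neurons; return validation MSE.
    Hidden weights are random and fixed; output weights are solved by least
    squares. This shortcut only keeps the sketch runnable -- a real design
    would train all weights."""
    W = rng.standard_normal((208, h)) / np.sqrt(208)
    Htr = np.tanh(Xtr @ W)
    Wout, *_ = np.linalg.lstsq(Htr, Ttr, rcond=None)
    Hval = np.tanh(Xval @ W)
    return np.mean((Hval @ Wout - Tval) ** 2)

# Try candidate hidden sizes and pick the one with the lowest validation error.
candidates = [2, 5, 10, 20, 40]
errors = {h: val_mse(h) for h in candidates}
best = min(errors, key=errors.get)
print(best, errors[best])
```

In practice you would repeat each candidate size over several random weight initializations and compare the best or median result, since a single run can be unlucky.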
A priori information can help, especially in classification problems where each class is known to consist of a number of subclasses. Then a divide-and-conquer approach can be followed. I have only used this with elliptical basis functions (most of the time with radial basis functions). A first step in this case could be clustering each class into subclasses. I can't say much more without revealing proprietary info.
Both clustering and principal component decompositions help understand the data. Look at those first before determining how to construct a divide and conquer approach.
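As a concrete starting point for the principal component decomposition mentioned above, an SVD of the centered data is enough. The snippet below (Python/NumPy, with synthetic stand-in data, since no real data is attached to this thread) counts how many components capture 95% of the variance, which also hints at how far the 208 inputs could be compressed.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 208))  # 200 samples of a 208-dim input

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)          # variance explained per component
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1
print(k)  # components needed for 95% of the variance
```

If k comes out much smaller than 208 on the real data, projecting the inputs onto those leading components is a natural dimensionality reduction before network design.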
Also take a look at cascade correlation.
1 Comment
Greg Heath
1 Jun 2014
I just noticed your input dimensions of [208 200]. With the default data division ratios (0.7/0.15/0.15), Ntrn = 0.7*200 = 140.
Do you really expect to get reliable performance when you are trying to characterize a 208-dimensional input space with 140 vectors?
Reduce the input dimensionality and/or get more data.
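This concern can be quantified with a common rule of thumb (not spelled out in this thread, so take it as an illustration): keep the number of training equations, Ntrn*O, at or above the number of unknown weights of an I-H-O network, Nw = (I+1)*H + (H+1)*O. A quick back-of-the-envelope check in plain Python:

```python
# Capacity check: training equations vs. network weights.
I, O = 208, 5          # input / output dimensions from the question
Ntrn = 140             # training samples under the default 0.7/0.15/0.15 split
Ntrneq = Ntrn * O      # training equations: 140 * 5 = 700

# Nw = (I+1)*H + (H+1)*O; requiring Ntrneq >= Nw bounds H from above:
Hub = (Ntrneq - O) / (I + O + 1)
print(Hub)  # ~3.25 -- only a handful of hidden neurons are supportable
```

With an upper bound near 3 hidden neurons in total, the point above follows directly: either shrink the 208-dimensional input or collect more than 200 samples before worrying about three hidden layers.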
More Answers (0)