Using different learning algorithms for the neural net toolkit

How do I go about implementing a genetic algorithm (for example) tool to optimise the weights of a neural network so that it finds the global minimum? I'm worried that the built-in trainers are not adequate to find the global minimum.

Accepted Answer

Greg Heath on 28 Jul 2016
Edited: Greg Heath on 5 Aug 2016

0 votes

You may be worrying about the wrong thing. For a typical I-H-O feedforward net, the number of equivalent nets obtained just by changing weight signs and hidden-node index order is roughly
2^H * factorial(H) (= 3.7159e+09 for the default H = 10)
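That count is easy to verify in MATLAB (a quick sketch; H = 10 is the default hidden layer size for fitnet):

```matlab
H = 10;                        % default hidden layer size
Nequiv = 2^H * factorial(H);   % sign flips x hidden-node permutations
disp(Nequiv)                   % 3.7159e+09 in short format
```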
I find the best bet is to just find one of the many nets that MINIMIZE THE NUMBER OF HIDDEN NODES subject to the following maximum bound on mean-square-error.
MSE = mse(error) <= 0.001*MSE00
where
error = target - output;
MSE00 = mean(var(target',1)) % Average target variance
The resulting bounds on normalized MSE and Rsquare (Google R squared) are
NMSE = MSE/MSE00 <= 0.001
Rsq = 1 - NMSE >= 0.999
which is interpreted as successfully modelling more than 99.9% of the target variance.
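Putting those definitions together, the quantities above can be computed along these lines (a sketch assuming row-matrix targets and outputs, one column per case, and the Neural Network Toolbox mse function):

```matlab
% target, output: O-by-N matrices (rows = output variables, columns = cases)
error = target - output;
MSE   = mse(error);             % mean-square-error over all elements
MSE00 = mean(var(target',1));   % average target variance (biased, 1/N)
NMSE  = MSE/MSE00;              % normalized MSE
Rsq   = 1 - NMSE;               % fraction of target variance modelled
isgood = NMSE <= 0.001;         % equivalently, Rsq >= 0.999
```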
Initial weights are random. Therefore it is wise to make a double loop search over number of hidden nodes and initial random number states. I have posted zillions of examples in both the NEWSGROUP and ANSWERS. Good search words are
greg Hmin Hmax Ntrials
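A minimal sketch of that double loop, using the Hmin/Hmax/Ntrials naming from those posts (the data set, training settings, and stopping test here are illustrative placeholders, not the exact code from the posts):

```matlab
[x, t] = simplefit_dataset;           % placeholder input/target pair
MSE00  = mean(var(t', 1));            % average target variance
Hmin = 1; Hmax = 10; Ntrials = 10;    % search ranges (assumed values)
bestH = NaN; bestNMSE = Inf;
rng(0)                                % reproducible random initial weights
for H = Hmin:Hmax                     % outer loop: number of hidden nodes
    for trial = 1:Ntrials             % inner loop: random weight initializations
        net = fitnet(H);
        net.trainParam.showWindow = false;
        net = train(net, x, t);
        y    = net(x);
        NMSE = mse(t - y)/MSE00;
        if NMSE < bestNMSE
            bestH = H; bestNMSE = NMSE;
        end
    end
    if bestNMSE <= 0.001, break, end  % stop at the smallest H meeting the bound
end
```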
If you insist on using a genetic algorithm, see my post.
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (0)
