Scaled Conjugate Gradient - NN toolbox
Hi,
I have used MATLAB's 'trainscg' with 'mse' as the performance function and NETLAB's 'scg' with 'mse' as the performance function for the same training data set and still don't obtain the same generalisation on a set of other data files I have.
I have used the same Nguyen-Widrow initialisation method for the weights and biases, and the same 'dividerand' method to split the data into training, validation and testing sets.
I know the difference could be in the various parameters used. In the original paper, http://www.sciencedirect.com/science/article/pii/S0893608005800565, the lambda and sigma values are specified not as exact values but as inequalities; I have used values that satisfy the constraints laid down by the author.
Also, one thing that seems bizarre to me: MATLAB stops the learning after just 23 epochs, while NETLAB runs until it exceeds its maximum number of iterations. I understand the stopping criteria may be different.
Has anyone here worked with both of these toolboxes and found a way to obtain the same results from both? I would appreciate any general ideas and tips for making NETLAB's scg give results similar to MATLAB's trainscg.
Any help or advice will be greatly appreciated.
Thank you. Pooja
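One concrete difference worth checking: trainscg performs validation-based early stopping (it halts after `max_fail` consecutive validation failures), whereas NETLAB's scg, driven through netopt's options vector, has no validation set at all and only stops on gradient/error tolerances or the iteration cap in options(14). That alone could explain a 23-epoch stop versus hitting maximum iterations. A sketch (untested; `x` and `t` stand for your own inputs and targets) of pinning down the MATLAB side so the two runs are at least nominally matched:

```matlab
% Sketch: make trainscg's tunables explicit so they can be mirrored in NETLAB.
% The values shown are MATLAB's documented defaults for trainscg.
net = feedforwardnet(10, 'trainscg');        % hidden layer size is illustrative
net.divideFcn  = 'dividerand';               % same train/val/test split method
net.performFcn = 'mse';
net.trainParam.sigma    = 5.0e-5;            % sigma from Moller's paper
net.trainParam.lambda   = 5.0e-7;            % initial lambda
net.trainParam.epochs   = 1000;
net.trainParam.max_fail = 6;                 % validation stop -- no NETLAB equivalent
net.trainParam.min_grad = 1e-6;
[net, tr] = train(net, x, t);
% tr.stop reports WHY training stopped (e.g. validation stop vs min gradient),
% which is the first thing to compare against NETLAB's termination reason.
```

To disable early stopping for an apples-to-apples comparison, you could set `net.divideFcn = 'dividetrain'` (all data used for training) and let only `epochs`/`min_grad` terminate the run, closer to what NETLAB's scg does.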
Accepted Answer
More Answers (1)
saba momeni
1 Feb 2019
Hi everyone
I am training my feedforward neural network with scaled conjugate gradient.
I am not sure whether scaled conjugate gradient does its optimisation in full batch or with mini-batch training?
I only specify the lambda and the sigma for it, not a batch size.
I appreciate your answer.
Cheers
S
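For context: trainscg, like the other classic Deep Learning Toolbox training functions, is a full-batch method. Every iteration evaluates the error and gradient on the entire training set, which is why no batch-size parameter exists. A minimal sketch (untested; `x` and `t` are placeholders for your data):

```matlab
% Sketch: trainscg exposes no batch-size setting because it is full-batch --
% each epoch computes the gradient over the whole training set at once.
net = feedforwardnet(10, 'trainscg');
disp(net.trainParam)            % shows sigma, lambda, epochs, etc. -- no batch size
[net, tr] = train(net, x, t);   % all of x and t are used in every iteration
```

If you need mini-batch training, you would have to move to a different training setup (e.g. the trainNetwork/SGD-style workflows), since SCG's line-search-free update assumes an exact full-batch gradient.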