
trainbr returns the best network by performance, not the best regularized one

Jens Geisler on 6 December 2023
Edited: Jens Geisler on 15 December 2023
Using trainbr in R2022b for a feedforwardnet should return the network with the best regularization. However, it seems that the network with the best performance is returned instead. In the following example, the performance is lowest in epoch 2 (as indicated by tr.best_epoch), and the returned net appears to be from this epoch (when I set net.trainParam.epochs=2, the same net results). This network is not very regularized, yet the optimization process continues for another 998 epochs and ends with an "Effective # of Parameters" of roughly 4, which the returned result does not reflect at all.
If I set net.trainParam.max_fail=5, I can get train to return the net from epoch 19, which is much more regularized.
Long story short, I think trainbr is buggy and returns the wrong net.
rng(0)
% load data
[X, T_] = simplefit_dataset;
% resample data and apply noise
X= X(1:3:end);
T_= T_(1:3:end);
T= T_ + randn(size(T_));
% network with too many neurons
net= feedforwardnet(30, 'trainbr');
% net.trainParam.epochs=2;
% net.trainParam.max_fail=5;
[net, tr] = train(net, X, T);
% display results
Y = sim(net, X);
figure(1)
clf
hold on
plot(X, T_, 'DisplayName', 'real')
plot(X, Y, 'DisplayName', 'model')
plot(X(tr.trainInd), T(tr.trainInd), '.', 'DisplayName', 'Training')
plot(X(tr.valInd), T(tr.valInd), 'o', 'DisplayName', 'Validation')
plot(X(tr.testInd), T(tr.testInd), '*', 'DisplayName', 'Test')
hold off
grid on
legend
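For reference, a quick way to see the mismatch described above directly in the training record (a sketch; tr.best_epoch and tr.num_epochs are standard fields of the record returned by train):
% quick check: training runs far beyond the epoch whose network is returned
fprintf('Returned net is from epoch %d; training ran for %d epochs.\n', ...
    tr.best_epoch, tr.num_epochs)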

Answers (1)

Akshat on 15 December 2023
Hi Jens,
As I understand your question, you would like an explanation of why the "trainbr" function does not give the optimum result unless "net.trainParam.max_fail" is enabled.
I ran your code on my end and found that when "net.trainParam.max_fail" is not enabled, there is no validation set used to check for early stopping. Please see the images for the two cases, one where it is enabled and one where it isn't.
To check these graphs, press the performance button after running the code.
Case 1 (no early stopping): [training performance plot not shown]
Case 2 (early stopping): [training performance plot not shown]
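As a sketch of how the two cases can be reproduced (reusing X and T from the script in the question; plotperform draws the same plot as the Performance button):
% Case 2: enable validation-based early stopping, then plot the performance record
net = feedforwardnet(30, 'trainbr');
net.trainParam.max_fail = 5;   % leave this line out to reproduce Case 1
[net, tr] = train(net, X, T);
plotperform(tr)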
Upon further research, I found the reason why this is the case. Please refer to this MATLAB Answer for more details: https://www.mathworks.com/matlabcentral/answers/405727-why-does-the-trainbr-function-not-require-a-validation-dataset
Hope this helps.
Regards
Akshat
1 Comment
Jens Geisler on 15 December 2023
Edited: Jens Geisler on 15 December 2023
Hi Akshat,
thanks for your quick reply. Unfortunately it doesn't answer my question. I was hoping either for confirmation that trainbr is buggy or, if that's not the case, for your developers to elaborate some more on how trainbr works, especially on how the best regularization is decided.
In my example code, I just cannot believe that the returned net is the best regularized one! I actually believe it's the worst (cf. my explanation).
In my example I only used max_fail to demonstrate that other results are possible. I am well aware of this answer: https://www.mathworks.com/matlabcentral/answers/405727-why-does-the-trainbr-function-not-require-a-validation-dataset, and therefore know that max_fail should not be used with trainbr. From the linked answer: "validation is usually used as a form of regularization, but "trainbr" has its own form of validation built into the algorithm." I believe that in my example code this "own form of validation" does not work. From epoch 2 on, the algorithm increases the weight for regularization for another 998 epochs, but in the end a result with almost no regularization is returned. I believe this decision is due to the fallback rule of "return the best of all epochs as per the MSE performance function" instead of trainbr's "own form of validation".
Edit: for future improvement, it would be great if trainbr could return the weight ratio between MSE performance and regularization.
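In the meantime, a rough stopgap is to look at the per-epoch regularization information that trainbr records. This is only a sketch and assumes the training record exposes a gamk field (the effective number of parameters per epoch), which I believe trainbr provides:
% sketch: how regularization evolved vs. which epoch was returned
% (assumes tr.gamk exists and holds the effective number of parameters per epoch)
if isfield(tr, 'gamk')
    figure(2)
    plot(tr.epoch, tr.gamk)
    xline(tr.best_epoch, '--', 'returned epoch')
    grid on
    xlabel('epoch')
    ylabel('effective # of parameters')
end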
