Why is the final ValidationLoss of my model always worse than the best value it obtained during training?

Hello!
This is the first time I've posted a question on the forum: I hope it meets the requirements for a good question, and I'm open to any feedback on its format.
So, I'm training a deep learning model. My network is essentially a 3D version of Inception-ResNet-v2 with a custom MAE regression output layer that I copied from this page of the MATLAB docs, with some minor adjustments to the size values to make it work with 3D data.
During training, the network reaches a certain ValidationLoss, and I've set the option 'OutputNetwork' to 'best-validation-loss' so that trainNetwork returns the best model as its final output.
Once training is complete, I verify this value by using the returned net to predict responses for the exact same validation set that was used during training, and I always get a worse result. For example, if the best ValidationLoss during training was around 3.8, the MAE after using the returned network to predict responses on the validation set is around 4.2 instead.
Is there a specific reason for this? The network is supposed to be the exact one that achieved the best ValidationLoss (which in turn should be the best MAE, given the custom regression layer), and the data is definitely the exact same validation set used during training, so I can't understand why the performance differs so much.
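For reference, this is roughly what my setup looks like (the variable names and the unrelated option values here are placeholders, not my actual code):

options = trainingOptions('adam', ...
    'ValidationData', {XValidation, YValidation}, ...
    'OutputNetwork', 'best-validation-loss', ...
    'Plots', 'training-progress');

net = trainNetwork(XTrain, YTrain, layers, options);

% Re-evaluate the returned network on the same validation set
YPred = predict(net, XValidation);
maeAfterTraining = mean(abs(YPred(:) - YValidation(:)));  % ends up around 4.2, not 3.8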

Answers (1)

Gagan Agarwal, 21 December 2023
Hi Alfredo,
I understand that you want to know why the validation loss reported during training differs from the MAE you later measure with the returned network.
Here are a few potential reasons and considerations:
  1. Overfitting: if the validation set is small or not well randomized, picking the model by its validation loss can make that loss optimistic, with the network effectively starting to memorize the validation set over repeated validation checks.
  2. Evaluation during training: sometimes a model is evaluated on the validation set while it is still in training mode, which can yield an optimistic validation loss because of factors like dropout still being active.
  3. Randomness and state reset: deep learning frameworks often involve randomness. If the state of the random number generator changes between the training phase and the evaluation phase, results can differ; see the sketch after this list.
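To rule out points 2 and 3, you can fix the random seed before training and make sure you re-evaluate with the predict function, which runs the network in inference mode (so dropout layers are inactive). A minimal sketch, using placeholder variable names rather than your actual data:

rng(0, 'twister');  % fix the RNG state so the run is reproducible

net = trainNetwork(XTrain, YTrain, layers, options);

% predict runs the network in inference mode, so dropout is disabled;
% compare this MAE with the best ValidationLoss reported during training
YPred = predict(net, XValidation);
maeInference = mean(abs(YPred(:) - YValidation(:)));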
I hope it helps!
