Why is my test accuracy higher than validation accuracy?

14 views (last 30 days)
Maryam on 20 Sep 2024
Edited: John D'Errico on 20 Sep 2024
I used the Classification Learner app in MATLAB. My model has a validation accuracy of 60.6% and a test accuracy of 72.0%. I know the test set could just be a "lucky" set, but could there also be other reasons for such a big difference?

Accepted Answer

John D'Errico on 20 Sep 2024
Edited: John D'Errico on 20 Sep 2024
Not really. It may simply reflect that you needed more data, that your sets are just not large enough. The law of large numbers needs reasonably large data sets before the expected behavior prevails.
I might also add that there is a lot of confusion about these terms. I prefer "training" accuracy for the statistics on the set used to train the model, and I call the secondary check "validation" accuracy, since it tells you how well the model fits other data that was not used in the training step. That usage is not universal, but "training" does make the distinction explicit, at least in my eyes.
Typically, the training accuracy will be a little better than the validation accuracy, because no matter what, there will always be some component of overfitting; that is unavoidable. So if the numbers go the other way, luck and random chance played a part.
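As a rough illustration of that usual ordering (this example is mine, not part of the original answer, and assumes the Statistics and Machine Learning Toolbox), here is a minimal sketch comparing resubstitution (training) accuracy with cross-validated accuracy for a simple tree on the built-in fisheriris data:

% Minimal sketch: training (resubstitution) accuracy vs. cross-validated
% accuracy for a classification tree on the built-in fisheriris data.
% Requires the Statistics and Machine Learning Toolbox.
load fisheriris                        % example data shipped with the toolbox
rng(0)                                 % reproducible partitions
mdl = fitctree(meas, species);         % fit a simple classification tree
trainAcc = 1 - resubLoss(mdl);         % accuracy on the data used for training
cvMdl    = crossval(mdl, 'KFold', 5);  % 5-fold cross-validation
cvAcc    = 1 - kfoldLoss(cvMdl);       % accuracy on the held-out folds
fprintf('Training accuracy:          %.1f%%\n', 100*trainAcc)
fprintf('5-fold validation accuracy: %.1f%%\n', 100*cvAcc)

The first number normally comes out a little higher than the second, which is the expected direction described above.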
As for the difference being a big one: again, that may well be a function of how much data is available.
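To see how much the set size alone matters, the sketch below (again my own illustration, using a hypothetical true accuracy of 65% and hypothetical set sizes) simulates how widely a measured accuracy can swing on small evaluation sets:

% Illustrative only: spread of the measured accuracy when the classifier's
% true accuracy is fixed and only the evaluation-set size changes.
rng(0)                               % reproducible
trueAcc = 0.65;                      % hypothetical true accuracy
nTrials = 10000;                     % number of simulated evaluations
for n = [50 500]                     % small vs. larger evaluation set
    % Each column is one evaluation: n independent correct/incorrect outcomes
    measured = mean(rand(n, nTrials) < trueAcc, 1);
    fprintf('n = %3d: std %.1f%%, range [%.1f%%, %.1f%%]\n', ...
        n, 100*std(measured), 100*min(measured), 100*max(measured));
end

With only about 50 observations in each set, a gap of ten percentage points or more between two estimates of the same model can easily arise from random variation alone.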
