nnstart Neural Network Toolbox and validation ROC

1 view (last 30 days)
AD
AD on 11 May 2018
Edited: AD on 11 May 2018
Hi,
I have been training neural networks for classification with nnstart. I get perfect training results: 0% false positives (FP) and 100% true positives (TP). The test set performs considerably worse but is still acceptable; I can usually get up to 60% TP at 40% FP (and sometimes 80% TP). However, the validation-set ROC is usually very bad, i.e., random or worse. Can someone help me understand what this means? What does it mean when the validation ROC is bad while the training ROC is perfect?
P.S. I use neural network classification with about 100-200 samples, split by default into 70% training, 15% validation, and 15% test, and default parameters: scaled conjugate gradient backpropagation with cross-entropy minimization, 1000 hidden layers with sigmoid activation, and a softmax squashing function on the last layer.
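For reference, here is a minimal sketch of the kind of setup described above, assuming a patternnet with scaled conjugate gradient training, cross-entropy, and the default 70/15/15 split; the feature matrix x, the one-hot target matrix t, and the hidden-layer size are placeholders, not my actual data or network size:

% Hypothetical data: x is features-by-samples, t is classes-by-samples (one-hot)
net = patternnet(10, 'trainscg', 'crossentropy');  % hidden-layer size is a placeholder
net.divideParam.trainRatio = 0.70;                 % default 70/15/15 random split
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
[net, tr] = train(net, x, t);
y = net(x);
plotroc(t(:, tr.trainInd), y(:, tr.trainInd))      % training ROC (near-perfect in my case)
plotroc(t(:, tr.valInd),   y(:, tr.valInd))        % validation ROC (close to random in my case)
plotroc(t(:, tr.testInd),  y(:, tr.testInd))       % test ROC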

Answers (0)
