Classification Learner App vs. training and testing a model programmatically: is there a hidden magical step in the Classification Learner app?
I am trying to find a good model for my dataset. The problem is that I want to do leave-one-person-out cross validation, which is not available in the App. So I trained different models (e.g. Tree, SVM, KNN, LDA) programmatically using functions like fitctree, fitcsvm, fitcknn, and fitcdiscr. Following the leave-one-person-out procedure, the best model reaches an average classification accuracy of about 70%. However, when I model the same data in the App using 10-fold cross validation, the accuracy is much better, with TPR and TNR around 98%. It is really confusing why this is happening. Am I missing some step when I do the modeling programmatically? Or is there a way to do what the App does by writing scripts, and perhaps customize the cross-validation scheme to leave-one-person-out?
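For concreteness, here is a minimal sketch of the leave-one-person-out loop I mean (the names data, SubjectID, and Label are placeholders for my actual table, and fitcsvm stands in for any of the fitc* functions):

% Leave-one-person-out cross validation: each fold holds out all
% rows belonging to one subject. Assumes a table "data" whose columns
% are the predictors plus a categorical response "Label" and a
% "SubjectID" column identifying which person each row came from.
subjects = unique(data.SubjectID);
acc = zeros(numel(subjects), 1);

% SubjectID is only used for grouping, so drop it before training.
tbl = removevars(data, 'SubjectID');

for i = 1:numel(subjects)
    testIdx  = (data.SubjectID == subjects(i));  % all rows of one person
    trainIdx = ~testIdx;

    % Train on everyone else; any fitc* function (fitctree, fitcknn,
    % fitcdiscr, ...) can be swapped in here.
    mdl = fitcsvm(tbl(trainIdx, :), 'Label', 'Standardize', true);

    pred   = predict(mdl, tbl(testIdx, :));
    acc(i) = mean(pred == data.Label(testIdx));
end

fprintf('Mean leave-one-person-out accuracy: %.1f%%\n', 100*mean(acc));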
0 comments
Answers (1)
Stephan on 16 Jul 2018
Edited: Stephan on 16 Jul 2018
Hi,
A possible way to do this is to work with the App and then, once you get a good result, export the code to MATLAB. This lets you see the "magic" steps that are performed and modify the generated code if needed.
I could imagine that this procedure will solve your problem.
Best regards
Stephan
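Building on that suggestion, here is a minimal sketch of how the exported code could be reused for leave-one-person-out validation. It assumes that Generate Function in Classification Learner produced a trainClassifier.m of the usual form, returning a trained model with a predictFcn handle, and that the data table has placeholder columns SubjectID and Label as above; verify the details against your own exported file:

% Wrap the App-exported training function in a custom
% leave-one-person-out loop instead of relying on its built-in
% 10-fold validation.
subjects = unique(data.SubjectID);
acc = zeros(numel(subjects), 1);

for i = 1:numel(subjects)
    testIdx  = (data.SubjectID == subjects(i));
    trainIdx = ~testIdx;

    % Train on all other subjects. The second output is the App's
    % 10-fold validationAccuracy, which we ignore here.
    [trainedClassifier, ~] = trainClassifier(data(trainIdx, :));

    pred   = trainedClassifier.predictFcn(data(testIdx, :));
    acc(i) = mean(pred == data.Label(testIdx));
end

fprintf('Leave-one-person-out accuracy: %.1f%%\n', 100*mean(acc));

For what it's worth, a plausible reason for the 70% vs. 98% gap: the App's 10-fold scheme partitions individual observations, so samples from the same person can land in both the training and the test fold. With repeated measurements per person, that leaks subject-specific information and inflates the accuracy estimate compared with leaving whole persons out.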
6 comments