cvloss or kfoldloss for regression tree?
Hi,
I'm a bit confused with 'cvloss' and 'kfoldLoss'.
1. kfoldLoss
Syntax: L = kfoldLoss(cvmodel) returns the cross-validation loss of cvmodel.
load carsmall
>> XX = [Displacement Horsepower Weight];
>> YY = MPG;
>> cvmodel = fitrtree(XX,YY,'crossval','on');
>> L = kfoldLoss(cvmodel,'mode','average')
L =
30.3578
Default: 'mse', mean squared error.
2. cvloss
Syntax: E = cvloss(tree) returns the cross-validated regression error (loss) for a regression tree.
>> load carsmall
>> X = [Displacement Horsepower Weight];
>> Mdl = fitrtree(X,MPG);
>> rng(1);
>> E = cvloss(Mdl)
E =
25.7383
First, both cases use the same predictors and the same response, so why is there a difference between the L and E outcomes?
Second, in 'fitrtree' the 'crossval' option is 'off' by default. In the 'cvloss' example, 'Mdl = fitrtree(X,MPG);' never turns 'crossval' on, so how can it produce a cross-validated regression error when cross-validation was never enabled?
Third, how are kfoldLoss and cvloss calculated? It looks like both use MSE, yet they give completely different results.
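A likely explanation worth verifying: both functions perform 10-fold cross-validation on a randomly drawn partition. kfoldLoss uses the partition created when 'crossval' is turned on in fitrtree, while cvloss draws its own partition internally when called, so the two losses come from different random folds. A minimal sketch, assuming the default 10-fold behavior, that seeds the random number generator before each call so the partitions match:

```matlab
load carsmall
X = [Displacement Horsepower Weight];

rng(1);                                     % fix the random partition
cvmodel = fitrtree(X,MPG,'CrossVal','on');  % explicit cross-validation
L = kfoldLoss(cvmodel)                      % mean squared error over folds

rng(1);                                     % same seed -> same partition
Mdl = fitrtree(X,MPG);                      % plain tree, no 'CrossVal'
E = cvloss(Mdl)                             % cvloss cross-validates internally
```

With identical seeds the two losses should agree (or at least be very close); with different seeds the fold assignments differ, which would explain the gap between 30.36 and 25.74 above.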
Answers (1)
Jeremy Brecevic
27 November 2020
Unlike cvloss, kfoldLoss does not return SE, Nleaf, or BestLevel, so it cannot be used to examine the error at different pruning levels (and for a regression tree the error in both cases is the mean squared error, not a classification error).
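To illustrate those extra outputs, here is a sketch based on the documented cvloss signature (the 'Subtrees','all' option is an assumption about how to evaluate every pruning level):

```matlab
load carsmall
X = [Displacement Horsepower Weight];
Mdl = fitrtree(X,MPG);
rng(1);
% E         : cross-validated MSE at each pruning level
% SE        : standard error of E
% Nleaf     : number of leaves at each pruning level
% BestLevel : pruning level within one SE of the minimum loss
[E,SE,Nleaf,BestLevel] = cvloss(Mdl,'Subtrees','all');
```

kfoldLoss on a cross-validated model returns only the loss itself, which is why the two functions expose different information even though both rest on k-fold cross-validation.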