TreeBagger gives different results depending on 'oobvarimp' being 'on' or 'off'

4 views (last 30 days)
Michael Schwartz on 16 Nov 2011
Turning the 'oobvarimp' option 'on' or 'off' is only supposed to change whether a measure of variable importance is computed. It should not change the classification itself.
However, I have recently realized that it also produces a different classification. Below are my code and the resulting confusion matrices.
First, I run TreeBagger with exactly the same data and options, except for the 'oobvarimp' setting ('on'/'off').
Here is the 'off' version:
RandStream.setDefaultStream(RandStream('mlfg6331_64','seed',27));
model2roff = TreeBagger(400, Xr1, Y1, 'Method', 'classification', 'oobpred', 'on', 'oobvarimp', 'off', 'nprint', 100, 'MinLeaf', 1, 'prior', 'equal', 'cost', cost, 'categorical', find(iscatr));
Here is the 'on' version:
RandStream.setDefaultStream(RandStream('mlfg6331_64','seed',27));
model2ron = TreeBagger(400, Xr1, Y1, 'Method', 'classification', 'oobpred', 'on', 'oobvarimp', 'on', 'nprint', 100, 'MinLeaf', 1, 'prior', 'equal', 'cost', cost, 'categorical', find(iscatr));
I then compute the confusion matrices using the following code, first with model2ron and then with model2roff. In theory, these should be identical: the same TreeBagger model should have been created with both the 'off' and 'on' options. The only thing that should have changed is that the model stores a different measure of variable importance, which should not affect classification performance (using identical data, variables, etc.).
% model2r below stands for model2ron or model2roff in turn
[pred_model2r_oobY1, pred_model2r_oobY1scores] = oobPredict(model2r);
classorder = model2r.ClassNames;   % fix the class order explicitly
[conf, classorder] = confusionmat(Y1, pred_model2r_oobY1, 'order', classorder);
disp(dataset({conf, classorder{:}}, 'obsnames', classorder));
So, here are the results:
First, with oobvarimp 'off':
                pos_outcome   neg_outcome
pos_outcome     104           21
neg_outcome     23            62
Next, with oobvarimp 'on':
                pos_outcome   neg_outcome
pos_outcome     99            26
neg_outcome     30            55
You can see that there has been a noticeable change (even a small one would be problematic, since the forests should be identical).
Has anyone else observed this? Does anyone (Ilya Narsky) have an explanation?

Accepted Answer

Ilya on 16 Nov 2011
Computing variable importance by permuting observations across every variable (which is what you get when you set 'oobvarimp' to 'on') requires more runs of the random number generator. That is why the results are not identical.
'oobvarimp' does not change the classification in a meaningful way. What you observe are statistical fluctuations.
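To see the mechanism without TreeBagger, here is a small sketch using plain rand calls: one extra draw from the default stream changes every draw that follows, even though both runs start from the same seed.
% Not TreeBagger code - just a sketch of how one extra use of the default
% stream shifts all subsequent random numbers.
RandStream.setDefaultStream(RandStream('mlfg6331_64','seed',27));
a = rand(1,5);                 % draws consumed while "growing trees"

RandStream.setDefaultStream(RandStream('mlfg6331_64','seed',27));
extra = rand;                  % stands in for the extra permutation draws
b = rand(1,5);                 % the "tree growing" draws are now different

disp([a; b])                   % the two rows no longer match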
1 Comment
Michael Schwartz on 16 Nov 2011
Hi Ilya.
First, thanks for the quick and helpful response. I appreciate the information you have given me both now and in the past.
Second, while I understand the answer, it still seems like a fluctuation of roughly 5% (about 10 out of 200 observations changing class) is pretty high, though I know humans have poor intuition about these sorts of probabilistic issues. Does this seem unusually high, especially considering the quite good classification performance?
Lastly, is there any elegant approach to assessing robustness with respect to statistical fluctuations? I can imagine a brute-force approach (many simulations with different random number streams).


Other Answers (1)

Ilya on 16 Nov 2011
One way of assessing what counts as high or low under these circumstances is to look at the classification error. It can be modeled as a binomial random variable. Here is what I get:
>> N = 104+21+23+62
N = 210
>> X1 = 23+21
X1 = 44
>> [phat,pci] = binofit(X1,N)
phat = 0.2095
pci = 0.1566 0.2709
>> X2 = 30+26
X2 = 56
>> e2 = X2/N
e2 = 0.2667
e2 is at the boundary of the 95% confidence interval.
In general, your classification result forms a 2-by-2 contingency table, and standard tools for the analysis of contingency tables can be applied here. You could set up a formal test for whatever quantity you find most meaningful in your analysis.
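For instance, here is a rough sketch of a pooled two-proportion comparison of the two error counts. It treats the two runs as independent samples, which they are not (both use the same data), so take it only as an illustration.
% Sketch: compare the misclassification rates of the two runs with a
% pooled two-proportion z test. The counts come from the confusion
% matrices above; independence of the two runs is assumed for simplicity.
n  = 210;                                   % observations in each run
x1 = 21 + 23;                               % errors with oobvarimp 'off'
x2 = 26 + 30;                               % errors with oobvarimp 'on'
p  = (x1 + x2) / (2*n);                     % pooled error estimate
z  = (x2 - x1) / (n * sqrt(2*p*(1-p)/n));   % z statistic for the difference
pval = 2 * (1 - normcdf(abs(z)));           % two-sided p-value
fprintf('z = %.2f, p = %.3f\n', z, pval);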
You could take a look at the distribution of the scores returned by TreeBagger. I suspect that a good fraction of them would be close to the decision boundary (0.5). Although TreeBagger assigns them to one class or another, these are not confident classifications. I don't know how you want to apply your model for predicting on new data. Perhaps you could choose not to assign examples in the grey area (with scores near 0.5) to any class.
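As a sketch of what I mean (assuming your model2ron variable and the class names from your post; the 0.4 to 0.6 window is an arbitrary choice):
% Sketch: how many out-of-bag scores land near the 0.5 decision boundary?
[~, oobScores] = oobPredict(model2ron);
posCol   = strcmp(model2ron.ClassNames, 'pos_outcome');   % column for one class
posScore = oobScores(:, posCol);
inGrey   = posScore > 0.4 & posScore < 0.6;                % arbitrary grey area
fprintf('%d of %d observations score between 0.4 and 0.6\n', ...
    sum(inGrey), numel(inGrey));
hist(posScore, 20);                                        % score distribution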
Instead of relying on the assumptions behind the analysis of contingency tables, you could run many simulations and inspect the empirical distribution of the classification error (or whatever quantity you care about). You may not find anything interesting, but it wouldn't hurt.
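A brute-force sketch along those lines, reusing the variable names from your post (the number of repetitions and the choice of seeds are arbitrary):
% Sketch: retrain with different seeds and look at the spread of the
% out-of-bag error. Xr1, Y1, cost and iscatr are the variables from the
% original post; 25 repetitions is an arbitrary choice.
nRep = 25;
err  = zeros(nRep, 1);
for k = 1:nRep
    RandStream.setDefaultStream(RandStream('mlfg6331_64', 'seed', k));
    mdl = TreeBagger(400, Xr1, Y1, 'Method', 'classification', ...
        'oobpred', 'on', 'MinLeaf', 1, 'prior', 'equal', ...
        'cost', cost, 'categorical', find(iscatr));
    e = oobError(mdl);
    err(k) = e(end);            % OOB error using all 400 trees
end
fprintf('OOB error across seeds: mean %.3f, std %.3f\n', mean(err), std(err));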
