What percentage of my target data should be 1 and what percentage should be 0?
Hi everybody, I am a beginner and I want to use an SVM for classification of my data. Suppose the training data are like below, where X1 and X2 are the inputs (features that we extracted) and Y is the output. Now I have a question: if I have 15700 samples for training, how many of them should have label 1 and how many should have label 0 (I have 2 classes)? Should there be some particular proportion between the labels of my classes? What percentage of my target data should be 1 and what percentage should be 0? If 800 of my labels are 1 and 14900 are 0, will my classifier work right? Thanks
Answers (1)
Martin Brown
29 Jun 2015
It partially depends on whether the data / distributions are separable or overlapping.
Assuming the data is separable (it probably isn't), the numbers don't matter too much as long as you have exemplars (support vectors) which lie close to the margin and hence determine the decision boundary. Generally, the more data you train with the better, as you'll have a richer pot of potential support vectors, and the relative numbers don't matter.
If the data is not separable, the numbers should typically reflect the prior class probabilities, i.e. how the examples are drawn from the real world. You give an example where about 5% are class 1 and 95% are class 0. If this reflects the fact that class 1 examples are much rarer in real life than class 0, then this is appropriate. However, if the classes are very overlapping (based on your choice of features), it may be that the classifier would just learn to say class 0 all the time, as that would be right about 95% of the time, but it would not be predictive in any sense. So if you have imbalanced class distributions, as you seem to suggest, make sure that the features have enough discriminatory power to predict the rare class in some cases.
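As a rough check on that last point, here is a minimal sketch (assuming fitcsvm from the Statistics and Machine Learning Toolbox; the RBF kernel is an illustrative choice) that compares a cross-validated SVM against the trivial always-predict-the-majority-class baseline. X1, X2 and Y are the variables from the question:

X = [X1(:) X2(:)];                    % 15700-by-2 feature matrix
mdl = fitcsvm(X, Y, 'KernelFunction', 'rbf', 'Standardize', true);
cv  = crossval(mdl, 'KFold', 5);
err = kfoldLoss(cv);                  % cross-validated misclassification rate
baselineErr = mean(Y == 1);           % error of always predicting class 0 (~5% here)
fprintf('SVM error: %.3f, always-0 baseline: %.3f\n', err, baselineErr);

If err is not clearly below baselineErr, the features are probably not discriminating the rare class.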
3 Comments
Martin Brown
1 Jul 2015
I don't fully understand your comment/question, but if you remove data according to the proportions in which the classes occur in the data set (their prior class probabilities, assuming the data has been collected in an unbiased way), then you're simply subsampling the data.
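For example, a small sketch of such stratified subsampling (keepFrac is a hypothetical fraction to keep; X and Y as in the question):

keepFrac = 0.5;                       % illustrative value, not a recommendation
idx0 = find(Y == 0);
idx1 = find(Y == 1);
keep0 = idx0(randperm(numel(idx0), round(keepFrac*numel(idx0))));
keep1 = idx1(randperm(numel(idx1), round(keepFrac*numel(idx1))));
keep  = [keep0; keep1];               % class proportions are preserved
Xsub  = X(keep, :);
Ysub  = Y(keep);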
If you're deleting rows not in proportion to their prior probabilities, you'd be producing a biased classifier (strictly speaking, an SVM doesn't produce an "easy" probabilistic classifier, but it is similar in some senses). By removing data, you'd be assigning a higher weighting to one type of error. This may be correct in some cases (medical diagnosis, fraud detection), but you should be prepared to justify these weightings. Something like
has a decent description of this.
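As a sketch of the weighting alternative (keeping all rows and penalising errors on the rare class rather than deleting data), again assuming fitcsvm; the 10x cost below is purely illustrative and would need justifying:

cost = [0  1;                         % true class 0 predicted as 1: cost 1
        10 0];                        % true class 1 predicted as 0: cost 10
mdlWeighted = fitcsvm(X, Y, 'Standardize', true, ...
                      'ClassNames', [0 1], 'Cost', cost);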