Clustering - different size clusters
I have a fairly large matrix of data that I want to cluster against the first column, which can be separated into six clusters/categories of different sizes. I know the k-means clustering algorithm accepts the number of clusters as input but determines the clusters themselves iteratively. Is there anything in MATLAB that would be suitable for my task?
Accepted Answer
Image Analyst
29 October 2015
Yes, silhouette() lets you graphically judge the quality of the clustering produced by kmeans(). evalclusters() lets you evaluate the quality of the clustering achieved over a range of k values, so you can pick the right k if you don't know it for certain.
% Try values of k 2 through 5
clustev = evalclusters(X, 'kmeans', 'silhouette', 'KList', 2:5);
% Get the best value for k:
kBest = clustev.OptimalK
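As a sketch of the silhouette() step mentioned above (assuming your data is in a matrix X and kBest is the value chosen by evalclusters()), you can then run kmeans() and inspect the silhouette plot of the final clustering:

```matlab
% Cluster with the chosen k and plot the per-observation silhouette values
idx = kmeans(X, kBest);
figure;
silhouette(X, idx);   % values near 1 = well matched to own cluster
```

Wide, tall silhouette bars near 1 indicate compact, well-separated clusters; bars near or below 0 flag observations that may belong to a neighboring cluster.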
6 Comments
Bran
4 November 2015
Thank you, Image Analyst. I also wanted to ask whether you have experience validating data that has already been clustered. I am reading a lot of conflicting material about how this should be approached. I was hoping to produce p-values for the clusters to say whether they are real or not, but I am not sure if this would be a sensible approach.
Image Analyst
4 November 2015
An observation's silhouette value is a normalized (between -1 and 1) measure of how close the observation is to others in the same cluster, compared to observations in other clusters. Looking at the shape of the curves it generates can tell you how good the clusters are.
You can also use hierarchical clustering with linkage(), dendrogram(), and cluster() to see how close the various clusters are to each other.
Z = linkage(X);
dendrogram(Z);
You can divide the observations into groups according to the linkage distances Z:
grp = cluster(Z, 'maxclust', 6);
With the maxclust criterion, the observations are assigned to no more than the given number of groups.
To examine the quality of the hierarchical structure, you can determine the cophenetic correlation coefficient, which quantifies how accurately the tree represents the distances (dissimilarities) between the observations. The cophenet() function requires the linkage() distances and the pairwise distances between the points as input arguments:
Y = pdist(X);
C = cophenet(Z, Y);
Values of C close to 1 indicate a high quality solution (similar to a linear correlation coefficient). I'm guessing this is what you would like.
Hi,
Thank you for the suggestions. Just wanted to note that the data has already been separated into groups of different sizes, and in some cases the groups were assigned rather than produced by a clustering algorithm. As a result I was thinking maybe hypothesis testing would be appropriate. I am currently looking at the linkage values etc. for my clusters. Also, since in some cases it is unclear whether there is a cluster at all even though the points have been grouped together, I was wondering whether it would be OK to do a ttest(). For example, I was considering testing whether the values from the group are simply random or whether they do indeed differ from the normally distributed data, and producing a p-value that way. The other method I have worked with is generating the p-value via Monte Carlo sampling.
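For reference, a Monte Carlo p-value of the kind mentioned here is often computed as a permutation test. A minimal sketch, assuming two groups in vectors groupA and groupB (hypothetical names; the placeholder data below stands in for real observations) and a difference-in-means test statistic:

```matlab
% Permutation test: is the observed mean difference larger than chance?
groupA = randn(30, 1);            % placeholder data; replace with your own
groupB = randn(40, 1) + 0.5;
observed = mean(groupA) - mean(groupB);
pooled = [groupA; groupB];
nA = numel(groupA);
nPerm = 10000;
count = 0;
for i = 1:nPerm
    perm = pooled(randperm(numel(pooled)));   % shuffle group labels
    diffPerm = mean(perm(1:nA)) - mean(perm(nA+1:end));
    if abs(diffPerm) >= abs(observed)
        count = count + 1;
    end
end
pValue = (count + 1) / (nPerm + 1)   % two-sided Monte Carlo p-value
```

The +1 in numerator and denominator is a standard correction that keeps the estimated p-value strictly positive.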
Image Analyst
4 November 2015
No - I don't believe so. I'm not a Ph.D. statistician, but I'm pretty sure you would not use ttest2() to create your model. If your scattered points are normally spaced/distributed, the function you want is fitcnb(), which creates a Naive Bayes classification model. Naive Bayes was one of the first formal classification algorithms and remains one of the most popular methods, primarily due to the ease of constructing the classifier and its interpretable output. Naive Bayes classification models are based on Bayes' rule of conditional probability. During the training step, the model estimates the parameters of a normal probability distribution, assuming the features are independent of one another within each class.
nbModel = fitcnb(xTrain, yTrain);
To estimate the class of some non-training data:
yPredicted = predict(nbModel, xTest);
To compare data with a standard probability distribution, a probability plot can be used as a simple visual check:
probplot('normal', xTrain);
If the points fall close to the line, the data is approximately normal; if not, it is not.
Also look up jbtest(), lillietest(), and kstest() - they all deal with testing data for normality.
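As a minimal sketch of those normality tests (assuming your sample is in a vector x; note that kstest() compares against a standard normal, so the data is standardized first):

```matlab
x = randn(100, 1);                 % placeholder sample; replace with your data
[hJB, pJB] = jbtest(x);            % Jarque-Bera test
[hL,  pL]  = lillietest(x);        % Lilliefors test
[hKS, pKS] = kstest(zscore(x));    % one-sample Kolmogorov-Smirnov test
% h = 0 means the test does not reject normality at the 5% level
```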
Bran
6 November 2015
Thank you very much, Image Analyst, for all your help and advice. I've been looking at the various features offered by MATLAB and it is very useful. Just a final quick question: does MATLAB have a Mann-Whitney test that also accounts for clusters? For example, comparing the distributions of two groups that may each contain several clusters?
Image Analyst
6 November 2015
This is all I could find:
p = ranksum(x,y) returns the p-value of a two-sided Wilcoxon rank sum test. ranksum tests the null hypothesis that data in x and y are samples from continuous distributions with equal medians, against the alternative that they are not. The test assumes that the two samples are independent. x and y can have different lengths. This test is equivalent to a Mann-Whitney U-test.
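A usage sketch of ranksum(), assuming the two groups are in vectors x and y (the placeholder data below stands in for real samples; note this is the plain test and does not itself account for within-group clustering):

```matlab
x = randn(25, 1);          % placeholder samples; replace with your groups
y = randn(30, 1) + 1;
[p, h] = ranksum(x, y);    % p-value and reject (h=1) / no-reject (h=0) at 5% level
```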
More Answers (0)