Remove duplicate rows in table
Hi
I have a table with four columns and roughly 45,000 rows (example below). The first column is the name of a statistical test (there are several hundred different tests). For every statistical test, the values in the 4th column are duplicated at 0.25 and 0.5. Can anyone advise how to delete the first of these rows (the first 0.25 row and the first 0.5 row) for every statistical test?
'Perm t-test equal [250ms,500ms 92, 108]: Avg: 11_right FCL' -1.349 0.185 0.492
'Perm t-test equal [250ms,500ms 92, 108]: Avg: 11_right FCL' -1.457 0.155 0.496
'Perm t-test equal [250ms,500ms 92, 108]: Avg: 11_right FCL' -1.544 0.134 0.500
'Perm t-test equal [500ms,900ms 92, 108]: Avg: 11_right FCL' -1.544 0.129 0.500
'Perm t-test equal [500ms,900ms 92, 108]: Avg: 11_right FCL' -1.615 0.112 0.503
'Perm t-test equal [500ms,900ms 92, 108]: Avg: 11_right FCL' -1.665 0.100 0.507
1 comment
dpb
20 Jan 2021
I don't see the duplication in the sample dataset. (I'm presuming the 0.25 and 0.5 are confidence limits of the test and not values of the statistic, as Adam presumed below.)
To my eye, the above data are all for the same test for the first three rows and then for the second set of three; the fourth-column values are unique, other than the happenstance that the last row @250ms has the same second- and fourth-column values as the first row @500ms.
It's not at all clear to me what result is wanted from this dataset...
Accepted Answer
Follow the demo.
- T is a table.
- T.Test contains the test names, which can be strings, character vectors, categoricals, or numeric.
- col4 is the name of column 4.
For each test, the demo finds the rows where column 4 equals 0.25 or 0.50 and removes the duplicate (second) occurrence, keeping the first. The tests do not have to be in order.
% Create table
rng('default') % for reproducibility
T = table(repelem({'A';'B';'C'},5,1),rand(15,1), rand(15,1), repmat([0;.25;.25;.5;.5],3,1),...
'VariableNames',{'Test','col2','col3','col4'});
T.col4([7,14]) = .33;
disp(T)
Test col2 col3 col4
_____ _______ ________ ____
{'A'} 0.81472 0.14189 0
{'A'} 0.90579 0.42176 0.25
{'A'} 0.12699 0.91574 0.25
{'A'} 0.91338 0.79221 0.5
{'A'} 0.63236 0.95949 0.5
{'B'} 0.09754 0.65574 0
{'B'} 0.2785 0.035712 0.33
{'B'} 0.54688 0.84913 0.25
{'B'} 0.95751 0.93399 0.5
{'B'} 0.96489 0.67874 0.5
{'C'} 0.15761 0.75774 0
{'C'} 0.97059 0.74313 0.25
{'C'} 0.95717 0.39223 0.25
{'C'} 0.48538 0.65548 0.33
{'C'} 0.80028 0.17119 0.5
% For each test type, find the first two rows where col4 equals .25 and .50
[testID, testNames] = findgroups(T.Test);
rowNum1 = arrayfun(@(i) {find(testID==i & T.col4==0.25, 2)}, unique(testID));
rowNum2 = arrayfun(@(i) {find(testID==i & T.col4==0.50, 2)}, unique(testID));
% padarray (Image Processing Toolbox) pads shorter groups with NaN so the columns can be concatenated
rowNums = cell2mat(cellfun(@(c){padarray(c,[2-numel(c),0],NaN,'post')},[rowNum1', rowNum2']));
rmRows = rowNums(2, ~isnan(rowNums(2,:)));
% remove rows from table
T(rmRows, : ) = []
T = 11x4 table
Test col2 col3 col4
_____ _______ ________ ____
{'A'} 0.81472 0.14189 0
{'A'} 0.90579 0.42176 0.25
{'A'} 0.91338 0.79221 0.5
{'B'} 0.09754 0.65574 0
{'B'} 0.2785 0.035712 0.33
{'B'} 0.54688 0.84913 0.25
{'B'} 0.95751 0.93399 0.5
{'C'} 0.15761 0.75774 0
{'C'} 0.97059 0.74313 0.25
{'C'} 0.48538 0.65548 0.33
{'C'} 0.80028 0.17119 0.5
15 comments
DavidL88
20 Jan 2021
Hi Adam,
Thank you. I adapted this to my dataset, but it seems to erase every row with 0.25 or 0.5. Looking through the values listed in rowNum1 and rowNum2, both rows in T that match the values 0.25 and 0.5 seem to be identified and listed.
DavidL88
20 Jan 2021
Hi Adam
The effect you demonstrate is what I'm looking for. I'm not sure why I got a different result. There is a duplicate of all the 0.25 and 0.5 values. I've copied a sample of the table below, before running this code. The 0.25 values for this section are in rows 39 and 40 of the table T.
FCL' 0.449377841816944 0.653086728317921 0.242187500000000
FCL' 0.379117217892076 0.705573606598350 0.246093750000000
FCL' 0.411715894798510 0.683829042739315 0.250000000000000
FCL' 0.411715894798510 0.680329917520620 0.250000000000000
FCL' 0.564101287653156 0.573856535866034 0.253906250000000
FCL' 0.794131830628734 0.429142714321420 0.257812500000000
This is the same section after running the code. In rowNum1 I can see both 39 and 40 listed.
FCL' 0.449377841816944 0.653086728317921 0.242187500000000
FCL' 0.379117217892076 0.705573606598350 0.246093750000000
FCL' 0.564101287653156 0.573856535866034 0.253906250000000
FCL' 0.794131830628734 0.429142714321420 0.257812500000000
This is the exact code I ran on my table T. T3 is the last column and T4 is the first column.
% For each testtype, identify the first row where col4 is .25 and .50
[testID, testNames] = findgroups(T.T4);
rowNum1 = arrayfun(@(i) {find(testID==i & T.T3==0.25, 1)}, unique(testID));
rowNum2 = arrayfun(@(i) {find(testID==i & T.T3==0.50, 1)}, unique(testID));
% remove rows from table
T([rowNum1{:}, rowNum2{:}], : ) = [];
First, please see my previous comment, which I may have added while you were typing your response (I updated my solution).
Second, those trailing 0s are suspicious. Are you sure they are duplicates? For example,
0.25 == 0.2500000000000001
ans = logical
0
If floating point representation is causing problems like this, you'll need to modify these two lines of my solution
rowNum1 = arrayfun(@(i) {find(testID==i & abs(T.col4-0.25)<0.00001, 2)}, unique(testID));
rowNum2 = arrayfun(@(i) {find(testID==i & abs(T.col4-0.50)<0.00001, 2)}, unique(testID));
% ---> ---> ---> ---> ---> ---> ---> ---> ^^^^^^^^^^^^^^^^^^^^^^^
DavidL88
20 Jan 2021
Thanks. I ran that second script (adapted below). It didn't make any change to the table; the duplicate rows remained after running it this time. For rowNum1 and 2 the same values are there (both are listed as 648x1 cell, same as before with the first script, with 39 and 40, the first set of duplicates, listed there too).
% For each test type, identify the first two rows where col4 is .25 and .50
[testID, testNames] = findgroups(T.T4);
rowNum1 = arrayfun(@(i) {find(testID==i & T.T3==0.25, 2)}, unique(testID));
rowNum2 = arrayfun(@(i) {find(testID==i & T.T3==0.50, 2)}, unique(testID));
rowNums = cell2mat(cellfun(@(c){padarray(c,[2-numel(c),0],NaN,'post')},[rowNum1', rowNum2']));
rmRows = rowNums(2, ~isnan(rowNums(2,:)));
% remove rows from table
T(rmRows, : ) = []
It shouldn't be a floating-point issue, as those numbers represent exact time-stamps. I double-checked and confirmed this on one duplicate.
tf = isequal(T.T3(39),T.T3(40))
tf =
logical
1
> For rowNum1 and 2 the same values are there
Impossible. rowNum1 values are based on T.T3==.25; rowNum2 values are based on T.T3==.50. It would therefore be impossible to have the same values in both variables unless the result is an empty array (no matches). Or maybe you meant that they have the same values as in the previous version, which would only happen if all tests had duplicates for .25 and .50.
> both are listed as 648x1 cell
It's expected that they are cell arrays with the same size.
> It shouldn't be a floating-point issue, as those numbers represent exact time-stamps
They aren't integers, so they are necessarily represented in floating point. The question is whether their floating-point representation is causing a problem with the equality tests. It doesn't matter that T.T3(39) equals T.T3(40); what matters is whether those values equal 0.25 or 0.50 exactly.
Example:
4/3
ans = 1.3333
4/3-1
ans = 0.3333
(4/3- 1)*3
ans = 1.0000
(4/3- 1)*3 == 1
ans = logical
0
Could you attach a mat file containing the table?
Just tried
tf = isequal(T.T3(39),0.25)
tf =
logical
1
Given that rowNum1 and rowNum2 extracted these cells, shouldn't they be equal to .25 and .5 exactly?
I sorted rowNum1 and got back this. Shouldn't it be 39, 246, 453, etc.?
39
40
246
247
453
454
660
661
867
868
1074
1075
rmRows is coming back as []
Can I email you the table rather than uploading it here?
Adam Danz
21 Jan 2021
The MathWorks contact button does not support uploads.
You could upload it and remove it after I confirm that I received it.
Or you could upload it to a cloud service (e.g. Dropbox), send the link, and then remove it from the cloud service.
Or you could create a very similar table with dummy-data and make sure that it has the same problem as your table.
DavidL88
21 Jan 2021
Hi Adam,
I figured it out. Sorry, my mistake. If you look at the first example you'll see that in the first column the names of the duplicate rows are slightly different (where the ms times are). I split this column after the second "ms" and re-ran your code, using the second new column as the test identifier, so that the two rows are now identified as the same test. It worked perfectly and removed all the duplicate rows. Thanks for your help!
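The split described here might look something like the sketch below. This is an illustration only, not the code actually used; the variable names are assumptions, and it splits on the closing bracket rather than literally after the second "ms" (extractAfter requires R2016b or later).
% Sketch only: column names are assumptions, not the original code.
% The bracketed time window (e.g. [250ms,500ms ...]) is what differs
% between duplicate rows, so group on everything after it instead.
T.TestID = extractAfter(T.Test, ']');       % shared part of the test name
[testID, testNames] = findgroups(T.TestID); % group rows by the new column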
Adam Danz
21 Jan 2021
DavidL88
23 Jan 2021
Hi Adam,
Is there a way to adapt this code to remove any test that does not have a value < 0.05 in col3? So if none of the cells below in col3 for the 'A' test have a value < 0.05, then remove all rows for 'A'?
Test col2 col3 col4
_____ _______ ________ ____
{'A'} 0.81472 0.14189 0
{'A'} 0.90579 0.42176 0.25
{'A'} 0.91338 0.79221 0.5
{'B'} 0.09754 0.65574 0
{'B'} 0.2785 0.035712 0.33
{'B'} 0.54688 0.84913 0.25
{'B'} 0.95751 0.93399 0.5
{'C'} 0.15761 0.75774 0
{'C'} 0.97059 0.74313 0.25
{'C'} 0.48538 0.65548 0.33
Adam Danz
23 Jan 2021
Yep, change "T.col4==0.25" to T.col4<0.05 and then test for empty cells using cellfun('isempty',___). It will return a logical column vector indicating which test groups do not contain any col4 values less than 0.05.
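A minimal sketch of this suggestion, reusing the demo's variable names (an assumption on my part, not code from the thread):
% Sketch: flag test groups with no col4 value below 0.05.
rowNum3 = arrayfun(@(i) {find(testID==i & T.col4<0.05)}, unique(testID));
noMatch = cellfun('isempty', rowNum3);  % true where a group has no value < 0.05
The remaining step is then mapping the flagged group names back to table rows before deleting them.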
DavidL88
23 Jan 2021
Thank you. I tried this.
rowNum3 = arrayfun(@(i) {find(testID==i & TableMain.T2<0.50)}, unique(testID));
rowNums4 = cell2mat(cellfun(@isempty,rowNum3))
% remove rows from table
TableMain(rmRows4, : ) = []
I assume this is the right structure: rowNum3 finds those tests with a value < 0.05 in T2, rowNums4 identifies those that came back negative, and I can use this vector to remove those rows.
I'm getting the message below. I thought this might be because the dimensions differ: the previous code expected exactly two matches/duplicates, whereas this code allows a variable number of matches. I tried removing ", 2" after <0.05 in the first line, but got the same response.
Matrix dimensions must agree.
Error in Untitled4>@(i){find(testID==i&TableMain.T2<0.50)} (line 6)
rowNum3 = arrayfun(@(i) {find(testID==i & TableMain.T2<0.50)}, unique(testID));
Error in Untitled4 (line 6)
rowNum3 = arrayfun(@(i) {find(testID==i & TableMain.T2<0.50)}, unique(testID));
Adam Danz
23 Jan 2021
You were close...
idx below returns a logical vector the same size as testNames, indicating which test names are flagged. Then you have to identify which rows of the table have those test names.
rowNum3 = arrayfun(@(i) {find(testID==i & T.col4<0.50)}, unique(testID));
idx = cellfun(@isempty,rowNum3);
rmIdx = ismember(T.Test, testNames(idx));
T(rmIdx,:) = []
DavidL88
28 Jan 2021
This worked, thanks!
More Answers (0)