Fastest way to compute min and max values over an array subset

Jim Hokanson on 31 Dec 2014
Answered: Nicolás Casaballe on 26 May 2016
I'm trying to compute the minimum and maximum values over subsets of an array. My current approach is fairly simple but is noticeably slow for the size of data that I am working with.
e.g.
data = rand(1,100);
starts = [5 10 15 20];
stops = [9 14 19 24];
mins = zeros(1,4);
maxes = zeros(1,4);
for i = 1:4
    data_subset = data(starts(i):stops(i));
    mins(i) = min(data_subset);
    maxes(i) = max(data_subset);
end
Note that in reality data is an array with millions of elements, and I am grabbing a couple thousand subsets. The slow part seems to be creating data_subset. I tried creating a mex function to avoid the memory allocation, but the whole thing was a wash, largely due to the multi-threaded nature of min and max in Matlab. I tried calling min and max from mex, but this requires creating an mxArray, which, from what I can tell, requires actual data copying, not just pointer adjustment.
Any suggestions or thoughts on something I might have missed?
Thanks, Jim
4 Comments
Jan on 1 Jan 2015
Edited: Jan on 1 Jan 2015
I'm going to publish a hand-coded MEX version on the FEX. What is the desired result if a NaN appears in a chunk? Should it be ignored, or should NaN be returned for the min and max values?
The mex is single-threaded, but 10 times faster than the posted Matlab code for a chunk length of 1000, and 30 times faster for 10 elements per chunk. I assume calling it from a PARFOR loop will squeeze out even more speed.
Jim Hokanson on 2 Jan 2015
Thanks Jan. In general I think following convention and ignoring it would be best.
i.e. min([1 NaN 2]) => 1, not NaN


Accepted Answer

Jan on 2 Jan 2015
See FEX: ChunkMinMax. Automatic multi-threading is not trivial, because starting a thread for each interval might waste a lot of time when the intervals are tiny. You can try to split the list of intervals in a PARFOR loop, but avoid splitting the job per interval, because this will lead to cache-line collisions. So it is better to let the 1st half and the 2nd half of the intervals be processed in different threads.
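A minimal sketch of that splitting strategy, assuming ChunkMinMax accepts (data, starts, stops) and returns per-interval minima and maxima (check the FEX page for the actual interface):
nBlocks = 4;                                 % e.g. one batch of intervals per worker
edges = round(linspace(0, numel(starts), nBlocks + 1));
minsC = cell(1, nBlocks);
maxesC = cell(1, nBlocks);
parfor b = 1:nBlocks
    k = edges(b)+1 : edges(b+1);             % this worker's contiguous batch of intervals
    [minsC{b}, maxesC{b}] = ChunkMinMax(data, starts(k), stops(k));
end
mins = [minsC{:}];
maxes = [maxesC{:}];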
2 Comments
Jim Hokanson on 2 Jan 2015
Edited: Matt J on 2 Jan 2015
Thanks Jan!
I had written something similar, but it was not nearly as polished, and it was surprisingly slower. I see numerous ways to clean things up after looking at your code. I had thought for sure that the Matlab versions of min and max were multi-threaded, but now I'm not so sure ...
The performance benefit seems to vary significantly with the size of the chunks. If I increase the chunk size to 1e5, I get only about a 50% benefit. However, if I go to 1e6, the value improves to about 15%. Note that both of these were run on data sizes of 1e8. Reshape is actually faster in all cases, with the best case in my tests being about 65% execution time relative to the mex (~75% on average).
The trade-offs I see are that with reshape I have to be a bit clever about how I use it, but it is just Matlab code. The mex code handles unevenly sized chunks, but requires making sure the code is compiled for all users and that I'm always working with doubles.
Thanks again! Jim
Jan on 2 Jan 2015
You are welcome! Min and max in Matlab are multi-threaded when the data size exceeds a certain limit. For SUM, e.g., the limit is 89,000 elements.
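A quick way to probe that threshold on your own machine (the sizes below are arbitrary guesses, not documented limits):
small = rand(1, 1e4);                        % presumably below the multi-threading threshold
large = rand(1, 1e7);                        % presumably above it
tSmall = timeit(@() max(small));
tLarge = timeit(@() max(large));
fprintf('per-element time: small %.3g ns, large %.3g ns\n', ...
    1e9 * tSmall / numel(small), 1e9 * tLarge / numel(large));
If min and max switch to multiple threads for the large array, its per-element time should drop noticeably.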


More Answers (4)

Matt J on 1 Jan 2015
Edited: Matt J on 1 Jan 2015
I want to give credit to Jan's comment, which inspired this answer. Note, however, that this answer assumes the subsets are all relatively small (i.e., they can all be held in RAM simultaneously when NaN-padded to the length of the largest subset).
function [mins,maxes] = doProcess(data,starts,stops)
starts = starts(:).'; stops = stops(:).';   % ensure row vectors
len = stops - starts;
maxlen = max(len);
data(end+maxlen) = 0;                       % pad so the index matrix cannot run off the end
idx = bsxfun(@plus, starts, (0:maxlen).');  % one column of indices per subset
nanmap = bsxfun(@gt, idx, stops);           % flag positions past each subset's stop
data_subsets = data(idx);
data_subsets(nanmap) = nan;                 % NaN-pad the short subsets
mins = min(data_subsets);                   % min/max ignore NaN and work column-wise
maxes = max(data_subsets);
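For reference, calling it on the example data from the question:
data = rand(1,100);
starts = [5 10 15 20];
stops = [9 14 19 24];
[mins, maxes] = doProcess(data, starts, stops);  % one min and max per subset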
3 Comments
Matt J on 1 Jan 2015
Edited: Matt J on 1 Jan 2015
"This however could, depending on the array size, require a pretty significant duplication of the data."
Not from the reshaping. Reshaping doesn't duplicate any data. Also, it doesn't sound like your data is that big if it only has "millions of elements", so why care about duplication anyway?
Are your subsets always intervals connected end-to-end? And are they always the same length, except for the final interval (due to divisibility issues)? If so, just pad "data" with NaNs so that all intervals are of equal length. Then reshape and be done with it!
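A minimal sketch of that pad-and-reshape idea, assuming contiguous end-to-end chunks of a fixed length (chunkLen is a name introduced here for illustration):
chunkLen = 5;
nPad = mod(-numel(data), chunkLen);          % NaNs needed to even out the last chunk
padded = [data(:); nan(nPad, 1)];
M = reshape(padded, chunkLen, []);           % one chunk per column; reshape itself copies nothing
mins = min(M);                               % min/max skip NaNs and work column-wise
maxes = max(M);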
Jim Hokanson on 2 Jan 2015
Thanks Matt for your help.
I should clarify: the data duplication comes from data(1:100), not the reshaping. From what I can tell, Matlab does not try to maintain views of the original data set, as they are called in Python. If it did, then data(1:100) wouldn't necessarily lead to data duplication either.
Point taken regarding padding with NaNs and just being done with it!
Thanks, Jim



Matt J on 31 Dec 2014
Edited: Matt J on 31 Dec 2014
This tool looks like it would allow you to make data_subset share memory with "data" without actually copying it.
That would probably reduce the overhead of creating data_subset that you mention.
2 Comments
Jim Hokanson on 31 Dec 2014
This looks like exactly what I need! I'll give it a try and update with how it goes.
Jim Hokanson on 1 Jan 2015
Matt,
I tried the code, and it worked a couple of times, then crashed when I put it into a loop (perhaps due to some JIT change that wasn't present with line-by-line evaluation?). Anyway, I think this approach is probably best avoided. It would be nice if Matlab had some hidden way of allowing the mxSetPr check to be avoided, since then I think my mex code would work fine. Perhaps it would just be too hard to guarantee that some later optimization wouldn't prevent things from working.



Matt J on 31 Dec 2014
Edited: Matt J on 31 Dec 2014
Are all your subsets contiguous and of the same size, like in your example? If so, and if you have the Image Processing Toolbox, the imdilate command will perform a local max filter over all fixed-size windows of X (and a local min filter via imdilate of -X). You can then grab the results for the particular subsets you want from the filter output.
If the size of the subsets varies (but not too much), you could try grouping together all subsets of a common size and doing the above in a loop over the different subset sizes.
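A sketch of the fixed-size case, assuming contiguous subsets of odd length w so that each subset lines up with one centered window (requires the Image Processing Toolbox):
w = 5;                                       % fixed subset length (odd, so windows are centered)
se = ones(1, w);                             % flat structuring element
locMax = imdilate(data, se);                 % sliding max over every length-w window
locMin = -imdilate(-data, se);               % sliding min via the negated data
c = starts + floor(w/2);                     % window centers matching each subset
maxes = locMax(c);
mins = locMin(c);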

Nicolás Casaballe on 26 May 2016
Depending on how many subsets you are going to use and the size of the data, I think it may be worthwhile to spend some resources sorting the data just once and then using the sorted data to find the extrema of the subsets. The resulting code would look similar to this (not verified):
data = rand(1,100);
starts = [5 10 15 20];
stops = [9 14 19 24];
mins = zeros(1,4);
maxes = zeros(1,4);
[B,I] = sort(data); % This call would be lengthy for large datasets, but runs just once
rank = zeros(size(data));
rank(I) = 1:numel(data); % rank(k) = position of data(k) within the sorted vector B
for i = 1:4
    range = starts(i):stops(i);
    ind_sort = rank(range); % positions of the range's elements in the sorted data B
    % Since B is already sorted, the smallest and largest positions give the extrema
    mins(i) = B(min(ind_sort));
    maxes(i) = B(max(ind_sort));
end
I haven't tried this yet (sorry), but even if the code needs some fixing, the key idea is to run through the data values just once, instead of calling min and max each time.
