Converting parfor operations to gpuArray

nah on 5 Aug 2013
I have a working parallel version of code that does some likelihood calculations on a reasonably large matrix (using parfor). It is a trivially parallel operation: the calculation is performed column-wise, and parfor is used to operate on the columns of the data (one worker per column).
How could I achieve the same thing using a GPU (since the matrix is quite big and I have a limited number of workers)? All the operations are GPU-supported functions (matrix algebra only: eig, diag, and matrix multiplications).
i.e.,
% data is a 1000-by-200 matrix (1000 rows, 200 columns)
[nrows, ncols] = size(data);
likelihood = zeros(1, ncols);               % preallocate the output
parfor ix = 1:ncols
    workerData = data(:, ix);               % each worker takes one column
    likelihood(ix) = funcCalcLikelihood(workerData, params);
end
This is fast enough, but I need to repeat such calculations many times for a parameter sweep, so any speed increase would be welcome. Also, my dataset is getting bigger (ncols = 1500, and I only have 144 workers at most).
I have two Tesla C2050 GPUs and was wondering if I could convert this into a gpuArray operation.
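Concretely, the kind of conversion I have in mind is something like the sketch below (just my guess, assuming funcCalcLikelihood works unchanged when its input is a gpuArray, since it only uses GPU-supported operations like eig, diag and matrix multiplication):
% Sketch only: copy the matrix to the GPU once, then index columns on the
% device; gather brings each scalar result back to host memory.
gpuData = gpuArray(data);
likelihood = zeros(1, ncols);
for ix = 1:ncols
    colData = gpuData(:, ix);
    likelihood(ix) = gather(funcCalcLikelihood(colData, params));
end
Would that be the right direction, or should I instead split the columns across two pool workers, one per GPU?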
Thanks for your inputs.
3 Comments
nah on 19 Aug 2013
Thanks +Edric Ellis for your comment. I didn't quite get what you mean by converting the data, though. Do you mean that calling gpuArray automatically slices the big matrix by columns?
nah on 6 Sep 2013
Any updates on this, anyone?


Answers (0)
