Strange performance of MATLAB CUDA on matrices. Any ideas?

ehsan monfared on 10 Dec 2014
Commented: ehsan monfared on 12 Dec 2014
I have recently started using MATLAB's CUDA support (gpuArray) for some very simple matrix calculations on the GPU, but the performance results are very strange. Could anybody help me understand what exactly is going on and how I can solve the issue? Thanks in advance. Please note that the following code was run on a GeForce GTX TITAN Black GPU.
Assume a0, a1, ..., a6 are 1000x1000 gpuArrays, with U = 0.5 and V = 0.0.
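(Concretely, the setup is something like the following. The actual contents come from my simulation, so rand here is just a stand-in; only the sizes matter for the timings.)
% 1000x1000 gpuArrays; the values themselves do not affect the timing.
a0 = rand(1000, 'gpuArray'); a1 = rand(1000, 'gpuArray'); a2 = rand(1000, 'gpuArray');
a3 = rand(1000, 'gpuArray'); a4 = rand(1000, 'gpuArray'); a5 = rand(1000, 'gpuArray');
a6 = zeros(1000, 'gpuArray');
U = 0.5;
V = 0.0;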
titan = gpuDevice();
tic();
for i = 1:10000
    a6(1,1) = (0.5.*(a5(1,1)-a0(1,1))) - (a1(1,1)+a2(1,1)+a3(1,1)) - (a5(1,1).*U./3.0) - (a5(1,1).*V./2.0) + (0.25.*a5(1,1).*a4(1,1));
end
wait(titan);
time = toc()
The result: time = 17.98 seconds.
Now, redefining a0, a1, ..., a6, U, and V as ordinary CPU arrays and timing the same loop:
tic();
for i = 1:10000
    a6(1,1) = (0.5.*(a5(1,1)-a0(1,1))) - (a1(1,1)+a2(1,1)+a3(1,1)) - (a5(1,1).*U./3.0) - (a5(1,1).*V./2.0) + (0.25.*a5(1,1).*a4(1,1));
end
time = toc()
The result: time = 0.0098 seconds.
So the CPU is more than 1800 times faster (17.98 / 0.0098 ≈ 1835)!
I then repeated the calculation on the whole matrix rather than on a single element. Here are the results:
Results on the GPU:
titan = gpuDevice();
tic();
for i = 1:10000
    a6 = (0.5.*(a5-a0)) - (a1+a2+a3) - (a5.*U./3.0) - (a5.*V./2.0) + (0.25.*a5.*a4);
end
wait(titan);
time = toc()
The result: time = 6.32 seconds, which means that the operation on the whole matrix is much faster than on a single element!
Results on the CPU:
tic();
for i = 1:10000
    a6 = (0.5.*(a5-a0)) - (a1+a2+a3) - (a5.*U./3.0) - (a5.*V./2.0) + (0.25.*a5.*a4);
end
time = toc()
The result: time = 35.2 seconds.
And here is the most surprising result: defining a0, a1, ..., a6, U, and V as just 1x1 gpuArrays and running the following:
titan = gpuDevice();
tic();
for i = 1:10000
    a6 = (0.5.*(a5-a0)) - (a1+a2+a3) - (a5.*U./3.0) - (a5.*V./2.0) + (0.25.*a5.*a4);
end
wait(titan);
time = toc()
The result: time = 7.8 seconds. It is even slower than the corresponding 1000x1000 case!
Unfortunately, the line
a6(1,1) = (0.5.*(a5(1,1)-a0(1,1))) - (a1(1,1)+a2(1,1)+a3(1,1)) - (a5(1,1).*U./3.0) - (a5(1,1).*V./2.0) + (0.25.*a5(1,1).*a4(1,1));
is just one of about 100 lines, all inside a single for-loop, and it has turned out to be a real bottleneck, taking about 50% of the total computation time. Could anybody help me? Note that moving this part of the calculation to the CPU is not an option: because the bottleneck line sits inside the for-loop, transferring a1, ..., a6 to the CPU and copying the results back to the GPU in every iteration is even more time-consuming. Any advice is really appreciated.
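(For reference, the CPU round-trip I tried looks roughly like the following, simplified: gather the needed elements, do the scalar arithmetic on the CPU, and push the result back into the gpuArray.)
for i = 1:10000
    % gather only the elements this line needs to the CPU
    b0 = gather(a0(1,1)); b1 = gather(a1(1,1)); b2 = gather(a2(1,1));
    b3 = gather(a3(1,1)); b4 = gather(a4(1,1)); b5 = gather(a5(1,1));
    % scalar arithmetic on the CPU
    r = (0.5*(b5-b0)) - (b1+b2+b3) - (b5*U/3.0) - (b5*V/2.0) + (0.25*b5*b4);
    % write the result back into the gpuArray
    a6(1,1) = r;
end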
4 Comments
Joss Knight on 12 Dec 2014
1. Can you do the calculation on the whole matrix (without any indexing) and then just index the result, i.e.
temp = (0.5.*(RHO-FbarIN0))- (FbarIN2+FbarIN1+0.5*FbarIN0)-(RHO.*Uwnorth./3.0)+(RHO.*Vwnorth./2.0)- (0.25.*RHO.*BUOYANCE);
FbarIN2(1,NJ) = temp(1,NJ);
...and then let me know whether you got a worthwhile speedup? If that's looking better, we can try using masks to see if that's faster, i.e.
mask = false(size(FbarIN2));
mask(1,NJ) = true;
...
FbarIN2(mask) = temp(mask);
It might not be any faster though.
2. Try storing the values of RHO(1,NJ), FbarIN0(1,NJ), etc. up front so the matrices don't have to be indexed multiple times.
3. Can you compile the equation into an arrayfun function and see what speedup that gives, both when you pass scalars and when you pass the whole matrix? (A sketch follows after this list.)
4. Show me what you're doing to gather the data back to the CPU and run the scalar computation there. There may be a way to do the gather that is more efficient.
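A rough sketch of what I mean by suggestion 3, assuming RHO, FbarIN0, FbarIN1, FbarIN2 and BUOYANCE are gpuArrays of the same size and Uwnorth, Vwnorth are scalars (names taken from your snippet):
% Element-wise update, fused into a single GPU kernel by arrayfun.
updateFcn = @(rho, f0, f1, f2, buoy, u, v) ...
    (0.5.*(rho - f0)) - (f2 + f1 + 0.5.*f0) ...
    - (rho.*u./3.0) + (rho.*v./2.0) - (0.25.*rho.*buoy);

% Whole-matrix version: one fused kernel launch instead of several.
temp = arrayfun(updateFcn, RHO, FbarIN0, FbarIN1, FbarIN2, BUOYANCE, Uwnorth, Vwnorth);
FbarIN2(1,NJ) = temp(1,NJ);

% Scalar version: pass only the elements you actually need.
FbarIN2(1,NJ) = arrayfun(updateFcn, RHO(1,NJ), FbarIN0(1,NJ), FbarIN1(1,NJ), ...
    FbarIN2(1,NJ), BUOYANCE(1,NJ), Uwnorth, Vwnorth);
It might not be any faster, but it should at least cut the number of kernel launches per iteration.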
ehsan monfared on 12 Dec 2014
Joss, I am off-campus right now. I will test all the suggestions and let you know the results in a week. Once again I have to thank you for the great help.


Answers (1)

matt dash on 10 Dec 2014
Edited: matt dash on 10 Dec 2014
Timing GPU functions is a very tricky business. You should read up on GPU occupancy, block sizes, and related topics. The short story is that more data does not always mean longer computation times: for small inputs, the per-call launch and synchronization overhead dominates, so a 1x1 operation can cost about as much as a 1000x1000 one.
Also, if you are really concerned with performance, you should write your calculation in a .cu file, compile it to PTX, and call that from MATLAB instead of relying on MATLAB expressions. Read and implement the demo described here to see how much of a difference this makes: Mandelbrot Set Demo
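For illustration, the MATLAB side of the PTX route looks roughly like this. The file and entry-point names here (update.cu, update.ptx, updateKernel) are made up; you would write the kernel in CUDA C yourself and compile it with something like nvcc -ptx update.cu:
% Load a hypothetical precompiled kernel that evaluates the whole expression on the GPU.
k = parallel.gpu.CUDAKernel('update.ptx', 'update.cu', 'updateKernel');
k.ThreadBlockSize = [256 1 1];
k.GridSize = [ceil(numel(a6)/256) 1 1];

% One launch per iteration; all operands stay on the GPU.
a6 = feval(k, a6, a0, a1, a2, a3, a4, a5, U, V, numel(a6));
The advantage is that the entire expression becomes a single kernel launch, instead of one launch per MATLAB operation.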
