Using GPU with built-in machine learning functions
I wanted to check the speed-up of my classification task when using the GPU instead of the CPU. More specifically, I tried something like:
tic
NB=fitcnb(gpuArray(train_values), train_labels,'KFold',5,'CrossVal','on');
kfoldLoss(NB)
time_upNB=toc;
but I am getting a "Conversion to double from gpuArray is not possible." error. Is my syntax wrong, or is this not possible?
3 comments
Astarag Chattopadhyay
4 Jun 2018
Hi Tasos,
It is hard to tell which part of your code is producing this error. In general, it occurs when you try to assign the result of a gpuArray calculation into an ordinary (CPU) double array. As an example:
A = magic(5);
Agpu = gpuArray(A);
B = zeros(5);
for i = 1:5
B(i,i) = Agpu(i,i) * Agpu(i,i);
end
This code snippet will throw the same error you are getting, because B lives in CPU memory while the right-hand side is a gpuArray. You need to preallocate B as a gpuArray to work around this:
A = magic(5);
Agpu = gpuArray(A);
Bgpu = gpuArray(zeros(5));
for i = 1:5
Bgpu(i,i) = Agpu(i,i) * Agpu(i,i);
end
I suggest you set breakpoints in your code to see exactly where the error is thrown; most probably you need to do a gpuArray conversion at that line of code.
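On the original question: as a hedged sketch (I have not verified this against your release), many Statistics and Machine Learning Toolbox fitting functions such as fitcnb may not accept gpuArray inputs at all, in which case the fix is the opposite direction: bring the data back to host memory with gather before training. The variable names below mirror the question; the random data is only illustrative.

```matlab
% Sketch, assuming fitcnb rejects gpuArray inputs in this release:
% convert GPU-resident data back to an ordinary array with gather.
train_values_gpu = gpuArray(rand(100, 4));  % stand-in feature matrix on the GPU
train_labels     = randi([0 1], 100, 1);    % stand-in class labels (CPU)

train_values = gather(train_values_gpu);    % back to host memory
tic
NB = fitcnb(train_values, train_labels, 'KFold', 5, 'CrossVal', 'on');
kfoldLoss(NB)
time_upNB = toc;
```

Note that 'KFold',5 already implies cross-validation, so the extra 'CrossVal','on' pair is redundant but harmless.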
Answers (0)