Speed up 'dlgradient' with parallelism?
Hi all,
I am wondering if there is a way to speed up the 'dlgradient' function evaluation using parallelism or GPUs.
Answers (1)
Jon Cherrie
12 Apr 2021
You can use a GPU for the dlgradient computation by using a gpuArray with dlarray.
In this example, the minibatchqueue puts the data onto the GPU, so the GPU is used for the rest of the computation: both the "forward" pass and the "backward" (gradient) pass.
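As a minimal sketch of that pattern (the datastore ds, the network net, and the loss function modelLoss are placeholders, not from the original answer): minibatchqueue can deliver each batch as a GPU-backed dlarray via its OutputEnvironment option, so everything downstream, including dlgradient, runs on the GPU.

```matlab
% Sketch: ds, net, and modelLoss are assumed to exist already.
mbq = minibatchqueue(ds, ...
    MiniBatchSize=128, ...
    MiniBatchFormat="SSCB", ...     % spatial-spatial-channel-batch dlarray
    OutputEnvironment="gpu");       % batches come out as gpuArray-backed dlarray

while hasdata(mbq)
    [X,T] = next(mbq);              % X is already a dlarray on the GPU
    % Both the forward pass and the gradient computation execute on the GPU.
    [loss,gradients] = dlfeval(@modelLoss,net,X,T);
end
```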
2 件のコメント
Luis Hernandez
14 Nov 2023
Hello.
I've been trying to use the functions 'dlgradient' and 'dlfeval' with gpuArray inputs so that MATLAB will use my GPU. Unfortunately, they only work when I pass dlarray inputs.
What is the workaround for this? What is minibatchqueue doing that allows you to work with gpuArray?
Thanks!
-L
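For reference, a minimal sketch of the usual answer to this (the objective function here is a made-up example): dlgradient can only differentiate traced dlarray inputs, so the GPU transfer goes inside the dlarray rather than replacing it; wrap the gpuArray in a dlarray and the computation still runs on the GPU.

```matlab
% Sketch: wrap the gpuArray in a dlarray, not the other way around.
x = dlarray(gpuArray(single(rand(10,1))));   % dlarray backed by GPU data

[y,grad] = dlfeval(@objective,x);            % grad comes back as a GPU dlarray

function [y,grad] = objective(x)
    y = sum(x.^2);                 % toy objective (placeholder)
    grad = dlgradient(y,x);        % evaluated on the GPU
end
```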
Enrico
18 Feb 2025
Edited: Enrico, 18 Feb 2025
I have a similar problem. I have a 169x72x21 output and I would like to compute the dlgradient with respect to one input, pointwise (so not with the sum(y,'all') trick). I have tried defining a function
function dT_dt = derive_output(y,t)
dT_dt = dlgradient(y,t);
end
and then to run
dT_dt = arrayfun(@derive_output,y,repmat(t,1,num_PL));
where y is the output of my network and t is the scalar input value (replicated num_PL times so that the inputs to derive_output have the same size).
I get this error:
Error using gpuArray/arrayfun
Unable to read file 'dlarray'.
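The error occurs because gpuArray/arrayfun compiles its function for element-wise GPU execution and cannot call dlarray methods such as dlgradient. One possible workaround, sketched below with placeholder names (net, t), is to stay inside the traced function and differentiate one output element at a time, using dlgradient's RetainData option so the trace survives repeated calls:

```matlab
% Sketch of a workaround; net and t are placeholders. Note the loop calls
% dlgradient once per output element, which is correct but can be slow
% for a 169x72x21 output.
dT_dt = dlfeval(@derive_output_all,net,t);

function dT_dt = derive_output_all(net,t)
    y = forward(net,t);                        % e.g. 169x72x21 dlarray output
    dT_dt = zeros(size(y),'like',y);
    for k = 1:numel(y)
        % RetainData=true keeps the trace alive for the remaining elements.
        dT_dt(k) = dlgradient(y(k),t,RetainData=true);
    end
end
```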