How to get a dlgradient result for a blackbox function

24 views (last 30 days)
Fernando on 17 Jan 2025
Commented: Joss Knight on 7 Feb 2025 at 13:46
I'm working on a tomography reconstruction code using a neural representation, which I'd like to train. After some debugging I discovered that:
(1) you can't pass dlarrays as input to the radon() function;
(2) dlgradient will not work with untraced variables, so every variable in the computation must be a dlarray;
(3) using extractdata() to convert the input of radon() to double breaks the trace, so dlgradient fails anyway.
So now I have a conundrum. I need to compute that gradient, and dlgradient() alone won't do the job. What's the best alternative way to compute it and get the result in the format that adamupdate() requires?
Below is what my loss function looks like. As written, it fails at the Sinogram=radon()... line, because Intensity is a dlarray. As I said above, if you fix that by extracting the data, then dlgradient() fails instead.
Thoughts would be appreciated.
function [loss, gradients] = modelLoss(net, XY, Mask, SinogramRef, theta)
    Intensity = predict(net, XY);
    Intensity = reshape(Intensity, size(Mask));
    Intensity(Mask) = 0;
    Sinogram = radon(Intensity, theta);   % fails here: radon() does not accept a dlarray input
    Sinogram = dlarray(Sinogram);
    loss = sum((Sinogram(~Mask) - SinogramRef(~Mask)).^2);
    loss = loss / sum(~Mask(:));
    gradients = dlgradient(loss, net.Learnables);
end
Edit:
As pointed out below by Matt J (thanks!), the radon transform can be automatically converted to a matrix multiplication through the file exchange entry: https://www.mathworks.com/matlabcentral/fileexchange/44669-convert-linear-functions-to-matrix-form?s_tid=srchtitle
However, there are a few caveats that make this a very slow implementation (although it does work). For neural-net training it is not necessarily ideal, so I'll post my performance numbers as well. First, the fixed loss function now looks like this:
function [loss, gradients] = modelLoss(net, XY, Mask, SinogramRef, Amatrix, theta)
    Intensity = predict(net, XY);
    Intensity = reshape(Intensity, size(Mask));
    Intensity(Mask) = 0;
    Sinogram = Amatrix * Intensity(:);
    Sinogram = reshape(Sinogram, [], length(theta));
    SinogramMask = isnan(SinogramRef);
    loss = sum((Sinogram(~SinogramMask) - SinogramRef(~SinogramMask)).^2);
    loss = loss / sum(~Mask(:));
    gradients = dlgradient(loss, net.Learnables);
end
Where Amatrix is pre-calculated using func2mat:
Amatrix=func2mat(@(x) radonFc(x,size(occlusionSlice),theta),occlusionSlice); %forward radon transform
Amatrix=full(Amatrix);
Note the second line, full(Amatrix). That is required because the Deep Learning Toolbox will not work with sparse matrices; i.e., you can't do dlarray(Amatrix) if Amatrix is sparse. This is a big part of the slowdown, because for the Radon transform that A matrix is really large. Also note that I built Amatrix from the function radonFc:
function R = radonFc(X, sz, theta)
    X = reshape(X, sz);
    R = radon(X, theta);
end
which uses the actual radon() function (instead of what Matt proposed). The imrotate approach can produce a transformed image of a different size, and the actual projection is different from simply rotating the image (the Radon transform uses sub-pixel sampling and a slightly different interpolation scheme).
Now a very simple training loop would look like:
accfun = dlaccelerate(@modelLoss);
averageGrad = [];
averageSqGrad = [];
for iteration = 1:maxIterations
    [loss, grad] = dlfeval(accfun, net, inputData, occlusionSlice, targetData, Amatrix, theta);
    [net, averageGrad, averageSqGrad] = adamupdate(net, grad, averageGrad, averageSqGrad, iteration);
end
However, even with dlaccelerate, this function is extremely slow. The dlfeval call takes ~5 seconds on a high-end laptop GPU (RTX 4060), and for a 112x112 image Amatrix is a full 57865x12544 matrix taking up ~3 GB of memory (which is why my GPU is bogged down with memory transfers).
As always, with machine learning things are resource hungry. This one, however, is likely not scalable. The alternative is to spend more human time to build the projection functions manually. Between computer time and human time, I'm going to let the computer work on this one =)
1 Comment
Chuguang Pan on 18 Jan 2025
The radon function is not supported by the Deep Learning Toolbox. If you want to include it in the loss function, I think you may need to reproduce it with functions that the toolbox does support.


Accepted Answer

Matt J on 18 Jan 2025
radon() can be expressed as a matrix multiplication, and matrix multiplications are of course supported by dlarrays. One option for obtaining the matrix representation is func2mat in this FEX download: https://www.mathworks.com/matlabcentral/fileexchange/44669-convert-linear-functions-to-matrix-form
It may take some time to compute the matrix, but you only have to do it once.
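For example, the matrix could be computed once up front and cached to disk. A minimal sketch, assuming the func2mat call pattern and radonFc wrapper shown in the question's edit; the MAT-file name is only illustrative:
% Sketch: precompute the Radon system matrix once and cache it on disk,
% assuming func2mat from the FEX entry and the radonFc wrapper from the
% question's edit. The MAT-file name is arbitrary.
if isfile("radonAmatrix.mat")
    load("radonAmatrix.mat", "Amatrix");          % reuse the cached matrix
else
    Amatrix = func2mat(@(x) radonFc(x, size(occlusionSlice), theta), occlusionSlice);
    save("radonAmatrix.mat", "Amatrix", "-v7.3"); % -v7.3 handles large variables
end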
10 Comments
Matt J on 19 Jan 2025
Edited: Matt J on 19 Jan 2025
As the error message says, matrix multiplication is not supported between a sparse matrix and a single-precision matrix. But that is nothing specific to the Deep Learning Toolbox and dlarrays; it is true in general:
speye(2)*ones(2,'single')
Error using *
MTIMES (*) is not supported for one sparse argument and one single argument.
Just convert to double floats for the purposes of the loss and gradient calculation.
Sinogram=Amatrix*double( Intensity(:));
...
gradients = single( dlgradient(loss,net.Learnables) );
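Applied to the loss function from the question's edit, that suggestion might look roughly like the following sketch. It assumes the network learnables are single precision, and it uses dlupdate to cast each gradient in the Learnables table back to single (instead of calling single directly on the gradients table):
% Sketch: Matt's double/single casts applied inside modelLoss.
% Assumes Amatrix is kept as a sparse double matrix and the network
% learnables are single precision.
function [loss, gradients] = modelLoss(net, XY, Mask, SinogramRef, Amatrix, theta)
    Intensity = predict(net, XY);
    Intensity = reshape(Intensity, size(Mask));
    Intensity(Mask) = 0;
    Sinogram = Amatrix * double(Intensity(:));    % sparse*double is supported
    Sinogram = reshape(Sinogram, [], length(theta));
    SinogramMask = isnan(SinogramRef);
    loss = sum((Sinogram(~SinogramMask) - SinogramRef(~SinogramMask)).^2);
    loss = loss / sum(~Mask(:));
    gradients = dlgradient(loss, net.Learnables);
    gradients = dlupdate(@single, gradients);     % cast back before adamupdate
end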
Fernando on 19 Jan 2025
Thanks, Matt. Looks like this did the trick for CPU training. Thanks for your help =)


More Answers (1)

Joss Knight on 5 Feb 2025 at 20:35
Edited: Joss Knight on 5 Feb 2025 at 20:37
Hi Fernando. The hardcore solution to your problem (after using Matt's solution) is to implement the sparse matrix multiply yourself as a deep.DifferentiableFunction, which can then be used in your dlarray code:
classdef SparseMatrixVectorMultiply < deep.DifferentiableFunction
    properties
        SparseMatrix
    end
    methods
        function fcn = SparseMatrixVectorMultiply(sparseMatrix)
            % Constructor
            fcn@deep.DifferentiableFunction(1, ...
                SaveInputsForBackward=false, ...
                SaveOutputsForBackward=false, ...
                NumMemoryValues=0);
            fcn.SparseMatrix = sparseMatrix;
        end
        function [Y, memory] = forward(fcn, X)
            % Forward pass: Y = A * X
            Y = fcn.SparseMatrix * X;
            memory = [];
        end
        function dLdX = backward(fcn, dLdY, memory)
            % Backward pass: dLdX = A' * dLdY
            dLdX = fcn.SparseMatrix' * dLdY;
        end
    end
end
Call it inside your modelLoss function like this:
spmv = SparseMatrixVectorMultiply(Amatrix);
Sinogram = spmv(Intensity(:));
where Amatrix is the original sparse version of the matrix.
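Wired into the loss function from the question's edit, it might look roughly like this sketch; the double() cast guards against the sparse-times-single limitation noted in Matt's comment above, and spmv could instead be constructed once outside the training loop and passed in:
% Sketch: using the custom differentiable op inside modelLoss with a sparse Amatrix.
function [loss, gradients] = modelLoss(net, XY, Mask, SinogramRef, Amatrix, theta)
    spmv = SparseMatrixVectorMultiply(Amatrix);   % could be hoisted out and passed in
    Intensity = predict(net, XY);
    Intensity = reshape(Intensity, size(Mask));
    Intensity(Mask) = 0;
    Sinogram = spmv(double(Intensity(:)));        % sparse*single is not supported
    Sinogram = reshape(Sinogram, [], length(theta));
    SinogramMask = isnan(SinogramRef);
    loss = sum((Sinogram(~SinogramMask) - SinogramRef(~SinogramMask)).^2);
    loss = loss / sum(~Mask(:));
    gradients = dlgradient(loss, net.Learnables);
end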
6 Comments
Joss Knight on 7 Feb 2025 at 13:41
Sorry Matt, your link didn't work for me, which comment? All I see is Fernando saying that he has to make Amatrix full for it to work with dlarray and that makes it very slow. I mean, as a developer of the toolbox I know that radon and sparse operations are not supported by dlarray so I'm not sure how it could be working out of the box, but I could be missing something.
Joss Knight on 7 Feb 2025 at 13:46
Oh no, apologies, you're right, it seems they snuck that in there without me noticing: as long as Amatrix is not a dlarray, it will behave correctly.
>> dlfeval(@(A,x)dlgradient(sum(A*x,'all'),x), sprand(10, 10, 0.4), dlarray(randn(10,1)))
ans =
10×1 dlarray
2.9293
0.9959
0.8366
1.7619
2.2032
0.8939
2.7909
2.3375
1.2663
0.9108

