Jacobian matrix of neural network
6 views (last 30 days)
What is inside the Jacobian matrix? I know that for a trained network, the number of data samples 1, 2, ..., n equals the number of columns in the Jacobian matrix. What are the rows?
0 comments
Accepted Answer
Cam Salzberger
29 Feb 2016
Hello Rita,
The number of rows in the Jacobian output by "defaultderiv" is the sum of the number of weights and biases for the network. For example, if you do this to create the network:
[x,t] = simplefit_dataset;
net = feedforwardnet(10);
net = train(net,x,t);
y = net(x);
perf = perform(net,t,y);
dwb = defaultderiv('de_dwb',net,x,t);
Now "dwb" is the Jacobian of errors with respect to the net's weights and biases. It is a 31x94 matrix. If you check out the following properties in the network:
net.IW % Input weight matrices
net.LW % Layer weight matrices
net.b % Bias vectors
you can see that "net.IW" contains a 10x1 matrix, "net.LW" contains a 1x10 matrix, and "net.b" contains a 10-element vector and a 1-element vector. The number of elements adds up to 31.
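To make the counting concrete, here is a small numerical check. It is a sketch in Python/NumPy rather than MATLAB, using a hypothetical 1-10-1 tanh network analogous to feedforwardnet(10) trained on 94 samples (the size of simplefit_dataset); the parameter packing order is illustrative, not MATLAB's internal order. A finite-difference Jacobian of the errors has one row per weight/bias and one column per sample, matching the 31x94 shape above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-10-1 tanh network, analogous to feedforwardnet(10).
# Parameters: W1 (10x1), b1 (10), W2 (1x10), b2 (1) -> 31 in total.
W1 = rng.standard_normal((10, 1)); b1 = rng.standard_normal(10)
W2 = rng.standard_normal((1, 10)); b2 = rng.standard_normal(1)

def pack(W1, b1, W2, b2):
    """Flatten all weights and biases into one parameter vector (illustrative order)."""
    return np.concatenate([W1.ravel(), b1, W2.ravel(), b2])

def forward(wb, x):
    """Evaluate the network on scalar inputs x (shape (n,)); returns outputs (n,)."""
    W1 = wb[0:10].reshape(10, 1); b1 = wb[10:20]
    W2 = wb[20:30].reshape(1, 10); b2 = wb[30:31]
    h = np.tanh(W1 @ x[None, :] + b1[:, None])   # hidden layer activations, 10 x n
    return (W2 @ h + b2[:, None]).ravel()        # linear output layer, n values

wb = pack(W1, b1, W2, b2)
x = rng.standard_normal(94)   # 94 samples, as in simplefit_dataset
t = rng.standard_normal(94)

# Central-difference Jacobian of the errors e = t - y w.r.t. the 31 parameters.
eps = 1e-6
J = np.empty((wb.size, x.size))
for k in range(wb.size):
    wb_p = wb.copy(); wb_p[k] += eps
    wb_m = wb.copy(); wb_m[k] -= eps
    J[k] = ((t - forward(wb_p, x)) - (t - forward(wb_m, x))) / (2 * eps)

print(J.shape)  # (31, 94): one row per weight/bias, one column per sample
```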
I hope this helps clarify the Jacobian.
-Cam
1 comment
MAHSA YOUSEFI
5 Feb 2022
Edited: MAHSA YOUSEFI, 5 Feb 2022
Hi Cam.
I came to this question while looking into the Hessian of a deep network, and now I see this answer. However, I would like to ask you about a different way of computing the Hessian, if there is one.
I am using a custom training loop for my simple model, in which gradients are computed by dlgradient. As you know, dlgradient (through dlfeval) returns a table storing the layers, parameters (weights and biases), and gradient values. We also know that dlgradient accepts the "loss" as a scalar, along with dlnet.Learnables, the data samples dlX, and the targets dlY. I am interested in computing the Hessian for a small network using dlX and dlY. In fact, I am going to compute a sub-sampled Hessian if I use a mini-batch of dlX (so storing this matrix is not a problem). Could you please let me know how this would be possible? (I also posted this question on the Community, titled "Computing Hessian by dlgradient".) Thanks...
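One common workaround, when a full second-order pass is unavailable or too costly, is the Gauss-Newton approximation: for a sum-of-squared-errors loss L = sum_i e_i^2, the Hessian is approximately 2 * J * J', where J is exactly the (n_params x n_samples) error Jacobian discussed in the accepted answer. Restricting J to a mini-batch of columns gives a sub-sampled Hessian estimate. Here is a language-agnostic sketch in Python/NumPy with a random stand-in for J; the 31x94 shape mirrors the example above, and the batch size of 16 is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the (n_params x n_samples) error Jacobian, i.e. the kind of
# matrix defaultderiv('de_dwb', ...) returns. Shapes mirror the 31x94 example.
n_params, n_samples = 31, 94
J = rng.standard_normal((n_params, n_samples))

# Gauss-Newton approximation to the Hessian of L = sum_i e_i^2:
#   H ~= 2 * J @ J.T   (drops the sum_i e_i * Hessian(e_i) term)
H_full = 2 * J @ J.T                      # 31 x 31, symmetric PSD

# Sub-sampled estimate from a mini-batch of columns, rescaled so its
# expectation matches the full-batch sum over samples.
batch = rng.choice(n_samples, size=16, replace=False)
H_sub = 2 * (n_samples / batch.size) * J[:, batch] @ J[:, batch].T

print(H_full.shape)  # (31, 31)
```

This is the same J'J structure that Levenberg-Marquardt training builds on, which is why the Jacobian from the accepted answer is a natural starting point for Hessian estimates.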
More Answers (2)
Greg Heath
27 Feb 2016
The number of input variables
Hope this helps.
Thank you for formally accepting my answer
Greg
Monsij Biswal
19 Jun 2019
In which order are the derivatives present? I am unable to figure out the exact column-wise order. Is it layer-wise, starting from the first layer, with weights then biases for each layer, or something else?
0 comments