# How to implement PyTorch's Linear layer in Matlab?

20 views (last 30 days)
John Smith on 11 Feb 2023

Hello,
How can I implement PyTorch's Linear layer in Matlab?
The problem is that Linear does not flatten its inputs, whereas Matlab's fullyConnectedLayer does, so the two are not equivalent. For example, PyTorch's nn.Linear(c_in,c_out) applied to an (N,h,w,c_in) input acts only on the last dimension and returns (N,h,w,c_out), while fullyConnectedLayer(c_out) first flattens all spatial and channel dimensions of each observation into a single vector.
Thx,
J


### Answers (4)

Matt J on 11 Feb 2023

One possibility might be to express the linear layer as a cascade of a fullyConnectedLayer followed by a functionLayer. The functionLayer can reshape the flattened output back to the form you want:
layer = functionLayer(@(X)reshape(X,[h,w,c]));
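For illustration, here is a minimal sketch of that cascade; the sizes h, w, c, the featureInputLayer, and the trailing [] in reshape (to pass the batch dimension through) are assumptions, not part of the original answer:

% Minimal sketch: fullyConnectedLayer followed by a reshaping functionLayer.
% h, w, c are hypothetical output sizes; reshape's trailing [] preserves
% whatever batch dimension is present.
h = 4; w = 4; c = 3;
layers = [
    featureInputLayer(h*w*c)                  % flat input features
    fullyConnectedLayer(h*w*c)                % learnable W*x + b
    functionLayer(@(X) reshape(X,h,w,c,[]))   % restore h-by-w-by-c shape
    ];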
John Smith on 13 Feb 2023
Got it. This looks nice.


John Smith on 13 Feb 2023

It is possible to perform matrix multiplication using convolution, as described in "Fast algorithms for matrix multiplication using pseudo-number-theoretic transforms" (behind a paywall):
1. Convert the matrix A to a sequence.
2. Convert the matrix B to a sparse sequence.
3. Perform 1-D convolution between the two sequences to obtain a product sequence.
4. Extract the entries of matrix C from the product sequence.
Unfortunately, the paper provides the equations only for the square-matrix case; I worked out the general case. [The critical equations, giving the entry positions and polynomial degrees of the A-, B-, and C-sequences, are omitted here.] Note also that Matlab expects a polynomial's coefficients in descending order.
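Since those equations did not survive, the following is a hypothetical reconstruction of the square n-by-n case; the index formulas are my own derivation and may differ from the paper's:

% Hypothetical reconstruction (zero-based indices): place A(i,j) at
% position i + n*j of the A-sequence and B(j,k) at position
% n*(n-1-j) + n^2*k of the sparse B-sequence. Then for every j,
%   (i + n*j) + (n*(n-1-j) + n^2*k) = i + n*(n-1) + n^2*k,
% so C(i,k) accumulates at position i + n*(n-1) + n^2*k of the
% convolution, and no cross terms land on those positions.
n = 3;
A = rand(n); B = rand(n);
sa = zeros(1, n^2);                        % A-sequence
sb = zeros(1, (n-1)*(n^2+n) + 1);          % sparse B-sequence
for i = 0:n-1
    for j = 0:n-1
        sa(1 + i + n*j) = A(i+1,j+1);
    end
end
for j = 0:n-1
    for k = 0:n-1
        sb(1 + n*(n-1-j) + n^2*k) = B(j+1,k+1);
    end
end
s = conv(sa, sb);                          % one 1-D convolution
C = zeros(n);
for i = 0:n-1
    for k = 0:n-1
        C(i+1,k+1) = s(1 + i + n*(n-1) + n^2*k);
    end
end
assert(norm(C - A*B,'fro') < 1e-12)        % matches the ordinary product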
Here is how to do convolution with dlconv so that it matches conv:
a = (1:4)';
b = (5:10)';
dla = dlarray(a,'SCB');
weights = flipud(b); % dlconv cross-correlates, i.e. it uses the reverse of conv's weight order
% weights is a filterSize-by-numChannels-by-numFilters array, where
% filterSize is the size of the 1-D filters,
% numChannels is the number of channels of the input data,
% and numFilters is the number of filters.
bias = 0;
dlc = dlconv(dla,weights,bias,'Padding',numel(b)-1); % reconstructed call: pad by filter length-1 for full convolution
c = extractdata(dlc);
assert(all(abs(c - conv(a,b)) < 1e-14),'conv is different from dlconv');
Hope this helps.


Matt J on 13 Feb 2023

Another possible way to interpret your question is that you are trying to apply pagemtimes to the input X with a non-learnable matrix A, where the different channels of X are the pages. That can also be done with a functionLayer, as illustrated below, both with normal arrays and with dlarrays:
A=rand(4,3); %non-learnable matrix A
xdata=rand(3,3,2); %input layer data with 2 channels
multLayer=functionLayer(@(X) dlarray( pagemtimes(A,stripdims(X)) ,dims(X)) );
X=dlarray(xdata,'SSC');
Y=multLayer.predict(X)
Y =
  4(S) × 3(S) × 2(C) dlarray

(:,:,1) =

    0.8480    0.9729    0.9338
    1.1000    1.6463    1.5592
    0.9130    1.2452    1.1881
    1.1243    1.3362    1.2971

(:,:,2) =

    0.8228    0.5187    1.1387
    1.1783    0.7549    1.5675
    0.9390    0.5816    1.2925
    1.0862    0.6101    1.6128
%% Verify agreement with normal pagemtimes
ydata = pagemtimes(A,xdata)
ydata =

ydata(:,:,1) =

    0.8480    0.9729    0.9338
    1.1000    1.6463    1.5592
    0.9130    1.2452    1.1881
    1.1243    1.3362    1.2971

ydata(:,:,2) =

    0.8228    0.5187    1.1387
    1.1783    0.7549    1.5675
    0.9390    0.5816    1.2925
    1.0862    0.6101    1.6128
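For what it's worth, the same functionLayer appears to carry over to batched input unchanged, since pagemtimes broadcasts A over all trailing (channel and batch) dimensions. A minimal sketch, where the 'SSCB' format and batch size are my assumptions:

% Minimal sketch of the batched case (sizes and 'SSCB' format assumed).
A = rand(4,3);
xdata = rand(3,3,2,5);   % 2 channels, 5 observations in the batch
multLayer = functionLayer(@(X) dlarray(pagemtimes(A,stripdims(X)),dims(X)));
X = dlarray(xdata,'SSCB');
Y = multLayer.predict(X);
assert(isequal(size(Y),[4 3 2 5]))   % batch dimension passes through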
John Smith on 14 Feb 2023

Very nice! Just need to add the batch dimension.
I'd suggest putting this in a separate answer so that I can accept it.
PS: Too bad it's not available in Matlab as a built-in.


Matt J on 14 Feb 2023

Another approach is to write your own custom layer for channel-wise matrix multiplication. I have attached a possible version of this:
X=rand(3,3,2);
L=pagemtimesLayer(4); %Custom layer - premultiplies channels by 4-row learnable matrix A
L=initialize(L, X);
Ypred=L.predict(X)
Ypred =
  4(S) × 3(S) × 2(C) dlarray

(:,:,1) =

    0.6102    0.3216    0.8590
    0.8080    0.5732    1.3988
    0.2763    0.1120    0.2556
    0.5463    0.8053    1.2450

(:,:,2) =

    0.6860    0.9692    0.6784
    1.1580    1.5767    1.1105
    0.1999    0.2773    0.1199
    1.1686    1.3306    0.5205
Ycheck=pagemtimes(L.A,X) %Check agreement with a direct call to pagemtimes()
Ycheck =

Ycheck(:,:,1) =

    0.6102    0.3216    0.8590
    0.8080    0.5732    1.3988
    0.2763    0.1120    0.2556
    0.5463    0.8053    1.2450

Ycheck(:,:,2) =

    0.6860    0.9692    0.6784
    1.1580    1.5767    1.1105
    0.1999    0.2773    0.1199
    1.1686    1.3306    0.5205
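The attachment itself is not reproduced in this copy, so the following (saved as pagemtimesLayer.m) is only a hypothetical sketch of what such a layer might look like; the initialization scheme and the handling of an example-input array versus a networkDataLayout are my assumptions:

classdef pagemtimesLayer < nnet.layer.Layer
    % Hypothetical sketch of a custom layer that premultiplies every
    % channel (page) of the input by a learnable matrix A.
    properties (Learnable)
        A            % learnable m-by-n matrix
    end
    properties
        OutputRows   % m, the number of rows of A
    end
    methods
        function L = pagemtimesLayer(m)
            L.OutputRows = m;
        end
        function L = initialize(L,X)
            % Accept an example input array (as used above) or a
            % networkDataLayout; n is the input's first dimension.
            if isa(X,'networkDataLayout')
                n = X.Size(1);
            else
                n = size(X,1);
            end
            if isempty(L.A)
                L.A = randn(L.OutputRows,n)/sqrt(n);   % scaled random init
            end
        end
        function Y = predict(L,X)
            Y = pagemtimes(L.A,X);   % multiply every channel/batch page
        end
    end
end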
Matt J on 15 Feb 2023
That sounds right.
Although, part of me questions whether it was the best design for TMW to make the user responsible for summing over the batched input in the backward() method, since that dimension should always be handled the same way.

