Linear Neural Network: add another linear layer

Triple G on 10 Jun 2020
Answered: Ayush Anand on 11 Jun 2024
I have written a program for training a linear neural network for classification. Both the input and the output are 3 row vectors, and I need each node to have a linear transfer function. I have used the linearlayer type, as you can see in the code below:
function trainLNN(P, T)
% Example input:
% P = [1.7300 1.9500 2.3800; 1.4400 5.0000 4.7500; 3.4000 3.2000 3.3000];
% T = [0 1 0; 1 0 0; 0 0 1];
net = linearlayer;
net.trainParam.epochs = 1000;
net.trainParam.goal = 0.1;
net.trainParam.lr = 0.001;
net.trainParam.lr = maxlinlr(P, 'bias');  % overrides the 0.001 set above
net = train(net, P, T);
end
What I get is a neural network with just an input and an output layer (see the photo below).
How can I extend that to add a hidden layer between the input and output layers, so that the network looks like the one in the photo, with all nodes having a linear transfer function? I have tried net = linearlayer(P, T, 3, {'purelin', 'purelin'}); but I get the error "Too many parameter arguments".

Answers (1)

Ayush Anand on 11 Jun 2024
You can add a hidden layer between the input and output by creating the network with the "feedforwardnet" function (https://www.mathworks.com/help/releases/R2020a/deeplearning/ref/feedforwardnet.html?s_tid=doc_ta) instead of initializing net with "linearlayer". This creates one hidden layer with the number of neurons you specify. You can then make every node linear by setting each layer's "transferFcn" property to "purelin" (https://www.mathworks.com/help/releases/R2020a/deeplearning/ref/purelin.html?s_tid=doc_ta):
function trainLNN(P, T)
% Example input:
% P = [1.7300 1.9500 2.3800; 1.4400 5.0000 4.7500; 3.4000 3.2000 3.3000];
% T = [0 1 0; 1 0 0; 0 0 1];

% Define the size of the hidden layer
hiddenLayerSize = 3;

% Create a feedforward network with one hidden layer
net = feedforwardnet(hiddenLayerSize);

% Set the hidden layers' transfer functions to linear
for i = 1:(length(net.layers) - 1)
    net.layers{i}.transferFcn = 'purelin';
end

% Set the output layer's transfer function to linear
net.layers{end}.transferFcn = 'purelin';

% Training parameters and training, as in the original code
% (note: 'lr' is used by gradient-descent training functions such as 'traingd';
% feedforwardnet defaults to 'trainlm', which does not use it):
% net.trainParam.epochs = 1000;
% net.trainParam.goal = 0.1;
% net.trainParam.lr = maxlinlr(P, 'bias');
% net = train(net, P, T);
end
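
For reference, here is a minimal end-to-end sketch of the same idea, assuming the example P and T from the question and the toolbox defaults of feedforwardnet (an illustrative sketch, not part of the original answer):

P = [1.7300 1.9500 2.3800; 1.4400 5.0000 4.7500; 3.4000 3.2000 3.3000];
T = [0 1 0; 1 0 0; 0 0 1];
% One hidden layer with 3 neurons; feedforwardnet defaults to 'trainlm' training
net = feedforwardnet(3);
net.layers{1}.transferFcn = 'purelin';   % hidden layer: linear
net.layers{2}.transferFcn = 'purelin';   % output layer: linear
net.trainParam.epochs = 1000;
net.trainParam.goal = 0.1;
net = train(net, P, T);   % train on the example data
Y = net(P)                % network outputs for the training inputs
view(net)                 % shows the input, hidden, and output layers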

