Back Propagation Neural Network
Hi.
I need a workable backpropagation NN code. My inputs are 100x3 and my outputs are 100x2; the sample size is 100.
For example, the first 5 samples have inputs [-46 -69 -82; -46 -69 -82; -46 -69 -82; -46 -69 -82; -46 -69 -82; ... ] and outputs [0 0; 2 1; 5 5; 4 3; 3 5; ...].
Please suggest whether BP is suitable for my problem, and which learning technique and activation function would work best. Do I need to apply generalization? Kindly help me with the MATLAB code if possible. Thank you very much.
Accepted Answer
Greg Heath
4 Nov 2013
Convert to matrices and transpose:
[ I N ] = size(inputs)
[ O N ] = size(targets)
Use fitnet for regression and curve-fitting
help fitnet
doc fitnet
Use patternnet for classification and pattern-recognition
For examples beyond the help/doc documentation, try searching with
greg fitnet
greg patternnet
in both the NEWSGROUP and ANSWERS.
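A minimal sketch along these lines, assuming the question's data is held in variables named inputs (100-by-3) and targets (100-by-2); those variable names and the hidden layer size of 10 are illustrative choices, not values from the original post:
x = inputs'; % transpose to 3-by-100: one column per sample
t = targets'; % transpose to 2-by-100
[ I N ] = size(x) % I = 3 inputs, N = 100 samples
[ O N ] = size(t) % O = 2 outputs
net = fitnet(10); % regression net with one hidden layer of 10 neurons
[ net, tr ] = train(net, x, t); % default Levenberg-Marquardt training and random data division
y = net(x); % network outputs for the data
perf = perform(net, t, y) % mean squared error
For classification targets, patternnet with one-of-N (one-hot) target columns would be the analogous call.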
More Answers (2)
FATIH GUNDOGAN
23 Apr 2021
Can you explain your code structure? How do you define the variables and call the function?
function Network = backpropagation(L,n,m,smse,X,D)
[P,N] = size(X);
[Pd,M] = size(D);
%%%%% INITIALIZATION PHASE %%%%%
nLayers = length(L); % we'll use the number of layers often
%%Pre-allocation of the weight matrix between each layer
w = cell(nLayers-1,1); % a weight matrix between each layer
for i=1:nLayers-2
w{i} = [1 - 2.*rand(L(i+1),L(i)+1) ; zeros(1,L(i)+1)];
end
w{end} = 1 - 2.*rand(L(end),L(end-1)+1);
% initialize stopping conditions
mse = Inf; % assuming the initial weight matrices are bad
epochs = 0;
mtxmse = [];
%%%%% PREALLOCATION PHASE %%%%%
% Activation:
a = cell(nLayers,1); % one activation matrix for each layer
a{1} = [X ones(P,1)]; % input samples plus a bias column
for i=2:nLayers-1
a{i} = ones(P,L(i)+1); % inner layers include a bias node (P-by-Nodes+1)
end
a{end} = ones(P,L(end)); % no bias node at output layer
% net input at node k of the ith layer for the jth sample
net = cell(nLayers-1,1); % one net matrix for each layer, excluding the input layer
for i=1:nLayers-2
net{i} = ones(P,L(i+1)+1); % affix bias node
end
net{end} = ones(P,L(end));
% the accumulated and previous weight changes at layer i, summed over all samples
prev_dw = cell(nLayers-1,1);
sum_dw = cell(nLayers-1,1);
for i=1:nLayers-1
prev_dw{i} = zeros(size(w{i})); % prev_dw starts at 0
sum_dw{i} = zeros(size(w{i}));
end
%% FORWARD AND BACKWARD CALCULATION FOR EACH EPOCH
while mse > smse && epochs < 5000
% FEEDFORWARD PHASE: calculate the input/output of each layer for all samples
for i=1:nLayers-1
net{i} = a{i} * w{i}'; % compute inputs to current layer
if i < nLayers-1 % inner layers
a{i+1} = [2./(1+exp(-net{i}(:,1:end-1)))-1 ones(P,1)];
else % output layer
a{i+1} = 2 ./ (1 + exp(-net{i})) - 1; % bipolar sigmoid activation
end
end
% calculate sum squared error of all samples
err = (D-a{end}); % save this for later
sse = sum(sum(err.^2)); % sum of the error for all samples, and all nodes
% BACKPROPAGATION PHASE: calculate the modified error (delta), starting at the output layer
delta = err .* (1+a{end}) .* (1-a{end}); % derivative of the bipolar sigmoid, up to a constant factor
for i=nLayers-1:-1:1
sum_dw{i} = n * delta' * a{i}; % accumulate weight changes for layer i
if i > 1
delta = (1+a{i}) .* (1-a{i}) .* (delta*w{i}); % propagate delta to the previous layer
end
end
% update prev_dw, the weight matrices, the epoch count and the mse
for i=1:nLayers-1
prev_dw{i} = (sum_dw{i} ./ P) + (m * prev_dw{i}); % average change plus momentum term
w{i} = w{i} + prev_dw{i};
end
epochs = epochs + 1;
mse = sse/(P*M); % mse = 1/P * 1/M * summed squared error
mtxmse = [mtxmse; mse]; % record the error history for each epoch
O = a{end}; % network outputs at the current epoch
end
% Return the trained network
Network.structure = L; %Layer
Network.weights = w; %Weight
Network.epochs = epochs; %Epoch
Network.mse = mse; % Mean Square Error
Network.O = O;
Network.mtxmse = mtxmse; % Matrix of Mean Square Error
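For reference, a hypothetical call of the function above for the 3-input / 2-output problem in the original question; the layer sizes, learning rate, momentum and target MSE are illustrative guesses, not values supplied by either poster:
X = rand(100,3); % stand-in for the 100-by-3 input samples
D = rand(100,2); % stand-in for the 100-by-2 targets (the bipolar sigmoid output expects targets roughly in (-1,1))
L = [3 10 2]; % input, hidden and output layer sizes
n = 0.1; % learning rate (illustrative)
m = 0.9; % momentum (illustrative)
smse = 1e-3; % stop once the mean squared error falls below this
Network = backpropagation(L, n, m, smse, X, D);
Network.epochs % number of epochs run
Network.mse % final mean squared error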