Deep Learning Toolbox: regression without responses - Error using trainNetwork (line 150)
Hello,
I am using the Deep Learning Toolbox.
I am trying to train my network to learn a locality-sensitive representation that maps 16-point vectors to a single real value. Each input vector in the training database has a label (a vector of two real values) that represents characteristics of the 16-D vector. Basically, I want the following behaviour:
- If two input vectors (16-point) have close labels, then their output representations must be close
- If their labels are very different, their output representations must be different
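The two bullets amount to a pairwise loss: for every pair of observations whose labels are close, penalise the distance between their scalar outputs, weighted by how close the labels are. A minimal sketch (the function name and dummy shapes are mine, not from the toolbox):

```matlab
% Sketch of the intended pairwise locality loss.
% Y: 1-by-N scalar representations, T: 2-by-N labels.
function L = localityLossSketch(Y, T, sigma, diam)
    N = size(T, 2);
    L = 0;
    for i = 1:N
        for j = 1:i-1
            d = norm(T(:,i) - T(:,j));      % distance between the labels
            if d <= diam                    % only nearby labels contribute
                L = L + exp(-d/sigma) * abs(Y(i) - Y(j));
            end
        end
    end
end
```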
Since this is not really a regression, I wrote the custom output layer shown below. However, when I try to train the network, I get this error:
>> DL_main_V2_2
Error using trainNetwork (line 150)
Invalid training data. The output size (1) of the last layer does not match the response size (2).
Error in DL_main_V2_2 (line 41)
net = trainNetwork(X,Y,layers,options);
It is easy to understand: my "responses" are indeed of size two while the output is of size one, and I don't want it to be otherwise.
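If trainNetwork insists on matching sizes, one conceivable way around the check is a custom training loop, which imposes no constraint between output size and response size. A rough sketch only, assuming R2020b or later (dlnetwork, dlfeval, sgdmupdate); myLocalityLoss is a hypothetical stand-in for the locality loss, and a dlnetwork takes no output layer, so the loss is computed by hand:

```matlab
% Sketch: custom training loop instead of trainNetwork.
layers = [featureInputLayer(16)
          fullyConnectedLayer(16)
          fullyConnectedLayer(1)];
net = dlnetwork(layerGraph(layers));
vel = [];                              % velocity for SGDM updates

dlX = dlarray(single(X), 'CB');        % 16-by-N, channel-by-batch
for epoch = 1:100
    [loss, grad] = dlfeval(@modelLoss, net, dlX, Y);
    [net, vel] = sgdmupdate(net, grad, vel, 1e-4);
end

function [loss, grad] = modelLoss(net, dlX, T)
    dlY = forward(net, dlX);           % 1-by-N scalar outputs
    loss = myLocalityLoss(dlY, T);     % hypothetical: any loss, any sizes
    grad = dlgradient(loss, net.Learnables);
end
```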
Here is the code for the main file:
% Build the training data: one observation per column.
TcsY = zeros(16, 40*36);   % 16-point input vectors
Y    = zeros(40*36, 2);    % two-value labels
n = 0;
for i = 1:40
    for j = 1:36
        n = n + 1;
        Y(n,1) = Data.Te_List(i);
        Y(n,2) = Data.ne_List(j);
        for k = 1:16
            TcsY(k,n) = Data.Line_intensity(k,i,j);
        end
    end
end
Y = Y';     % 2-by-1440: responses, one column per observation
X = TcsY;   % 16-by-1440: inputs, one column per observation
layers = [
    sequenceInputLayer(16,'Name','input')
    fullyConnectedLayer(16,'Name','fc1')
    softmaxLayer('Name','sm1')
    fullyConnectedLayer(1,'Name','fc2')
    localityLayer('custom_out')
    ];
options = trainingOptions('sgdm', ...
    'MiniBatchSize',300, ...
    'MaxEpochs',100, ...
    'InitialLearnRate',1e-4, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropFactor',0.1, ...
    'LearnRateDropPeriod',20, ...
    'Shuffle','every-epoch', ...
    'ValidationData',{X,Y}, ...
    'ValidationFrequency',20, ...
    'Plots','training-progress', ...
    'Verbose',true);
net = trainNetwork(X,Y,layers,options);
I may add that the middle layers are just placeholders at this stage.
And the custom output layer:
classdef localityLayer < nnet.layer.RegressionLayer
    properties
        % (Optional) Layer properties.
        sigma = 10;
        diam = 0.01;
        % Layer properties go here.
    end
    methods
        function layer = localityLayer(Name)
            layer.Name = Name;
            layer.Description = 'locality-sensitive output layer';
        end
        function loss = forwardLoss(layer, Y, T)
            % Return the loss between the predictions Y and the
            % training targets T.
            %
            % Inputs:
            %   layer - Output layer
            %   Y     - Predictions made by network
            %   T     - Training targets
            %
            % Output:
            %   loss  - Loss between Y and T
            L = 0;
            N = size(T,2); % mini-batch size
            % Rescale each label component to [0,1]
            T1 = (T(1,:)-min(T(1,:)))/(max(T(1,:))-min(T(1,:)));
            T2 = (T(2,:)-min(T(2,:)))/(max(T(2,:))-min(T(2,:)));
            TT = [T1;T2];
            for i = 1:N
                for j = 1:i
                    if norm(TT(:,i)-TT(:,j)) <= layer.diam
                        C_ij = exp(-norm(TT(:,i)-TT(:,j))/layer.sigma);
                        L = L + C_ij * norm(Y(i)-Y(j));
                    end
                    % else C_ij is 0 so nothing would happen anyway
                end
            end
            loss = L; % assign the output argument (was missing before)
        end
        function dLdY = backwardLoss(layer, Y, T)
            % Backward propagate the derivative of the loss function.
            %
            % Inputs:
            %   layer - Output layer
            %   Y     - Predictions made by network
            %   T     - Training targets
            %
            % Output:
            %   dLdY  - Derivative of the loss with respect to the predictions Y
            dLdY = 0;
            N = size(T,2); % mini-batch size
            T1 = (T(1,:)-min(T(1,:)))/(max(T(1,:))-min(T(1,:)));
            T2 = (T(2,:)-min(T(2,:)))/(max(T(2,:))-min(T(2,:)));
            TT = [T1;T2];
            for i = 1:N
                for j = 1:i
                    if norm(TT(:,i)-TT(:,j)) <= layer.diam
                        C_ij = exp(-norm(TT(:,i)-TT(:,j))/layer.sigma);
                        dLdY = dLdY + C_ij * sign(Y(i)-Y(j));
                    end
                    % else C_ij is 0 so nothing would happen anyway
                end
            end
        end
    end
end
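Independent of trainNetwork, the custom layer can be exercised on dummy data to make sure forwardLoss at least runs; checkLayer (in the toolbox) runs MATLAB's built-in validity tests and would likely flag the same prediction/response size mismatch. A sketch with made-up sizes:

```matlab
% Sketch: exercise the custom output layer in isolation.
layer = localityLayer('custom_out');

Yp = rand(1, 5);                 % dummy predictions, one scalar each
T  = rand(2, 5);                 % dummy two-value targets
L  = forwardLoss(layer, Yp, T)   % should return a scalar loss

% Built-in validity tests; the input size is a guess at the
% expected per-observation prediction size.
checkLayer(layer, [1 5], 'ObservationDimension', 2)
```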
I have looked at the code that generates the error to find a workaround, but I couldn't see how to do it differently. I feel like the error comes from the fact that trainNetwork is really designed for regression; what did I do wrong? Maybe I shouldn't inherit from nnet.layer.RegressionLayer?
Any idea would be welcomed.
Thank you
0 Comments
Answers (0)