Why is there a difference between the output of a neural network from the built-in test function (ANN Toolbox) and a custom-designed test function? (Test function: one that checks the accuracy of the network after training)

Here is the code for the network.
%%%%%%%%%%%%%%%%%%%%%%%Code %%%%%%%%%%%%%%%%%%%%%%%%
% train_input  -- 224 x 320 matrix: 320 training samples, each with 224 features
% test_input   -- 224 x 80 matrix: 80 test samples, each with 224 features
% train_target -- 40 x 320 matrix: targets for the 320 training samples
% test_target  -- 40 x 80 matrix: targets for the 80 test samples
setdemorandstream(491218382); % fix the random seed for reproducible results
net = patternnet(44);
net.performFcn = 'mse';
net.trainFcn = 'trainscg';
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
net.divideParam.trainRatio = 1.0; % training set fraction
net.divideParam.valRatio = 0.0; % validation set fraction
net.divideParam.testRatio = 0.0; % test set fraction
net.trainParam.epochs = 300;
net.trainParam.showWindow = 0;
[net,tr] = train(net,train_input,train_target);
%%%%%%%%%%%%%%%%%%%%Inbuilt Testing of the Network %%%%%%%%%%%%%%%%%%%%%%%%
testY = net(test_input);
[c,cm] = confusion(test_target,testY);
fprintf('Percentage Correct Classification : %f%%\n', 100*(1-c));
%%%%%%Output : Percentage Correct Classification : 95 % %%%%%%
%%%%%%%%%%%%%%%%%%%Custom Designed Testing of the Network %%%%%%%%%%%%%%%%%%%
wb = formwb(net,net.b,net.iw,net.lw); % pack all weights and biases into one vector
[b,iw,lw] = separatewb(net,wb);       % unpack into biases, input weights, layer weights
weight_input = iw{1,1};   % 44 x 224 input-to-hidden weights
weight_hidden = lw{2,1};  % 40 x 44 hidden-to-output weights
bias_input = b{1,1};      % 44 x 1 hidden-layer biases
bias_hidden = b{2,1};     % 40 x 1 output-layer biases
% NOTE: this recomputes the normalization settings from the test data itself
% rather than reusing the training settings (see the accepted answer below)
test_input = mapminmax(test_input);
test_input = removeconstantrows(test_input);
hidden = [];
output = [];
for j = 1:80
    %%%%% 1st Layer Calculation %%%%%
    for k = 1:44
        weighted_sum = sum(times(test_input(:,j),weight_input(k,:)'));
        hidden(k,j) = 2/(1+exp(-2*(weighted_sum + bias_input(k)))) - 1; % tansig function
    end
    %%%%% 2nd Layer Calculation %%%%%
    for k = 1:40
        weighted_sum = sum(times(hidden(:,j),weight_hidden(k,:)'));
        output(k,j) = 2/(1+exp(-2*(weighted_sum + bias_hidden(k)))) - 1; % tansig function
    end
end
output = mapminmax(output); % NOTE: renormalizes using settings from the output itself (see the accepted answer)
[c,cm] = confusion(test_target,output);
fprintf('Percentage Correct Classification : %f%%\n', 100*(1-c));
%%%%%%Output : Percentage Correct Classification : 90 % %%%%%%
Why is there a difference in the percentage of correct classification when the two methods are expected to give the same result?

Accepted Answer

Greg Heath on 7 Apr 2015
MAPMINMAX is not used correctly:
1. The normalization settings obtained from the training input should be applied to the test input.
2. The inverse of the settings obtained from the training target should be applied to the test output (see the sketch below).
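In code, the correction looks roughly like this. A minimal sketch, assuming the default removeconstantrows/mapminmax processing and recomputing the settings from the training data (the network stores its own copies in net.inputs{1}.processSettings and net.outputs{2}.processSettings):
% The settings must come from the TRAINING data, then be applied to the test data
[~, ps_x] = mapminmax(train_input);        % input-normalization settings
[~, ps_t] = mapminmax(train_target);       % target-normalization settings
x = mapminmax('apply', test_input, ps_x);  % normalize test input with TRAINING settings
% ... run the manual two-layer forward pass on x to get raw outputs y_raw ...
y = mapminmax('reverse', y_raw, ps_t);     % map the output back with the INVERSE of the target settings
The same holds for removeconstantrows: call it with 'apply' and the settings computed from the training input, not afresh on the test input.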
Thank you for formally accepting my answer
Greg
2 Comments
Sai Kumar Dwivedi on 7 Apr 2015
I am really sorry, but I didn't quite follow you.
After training the network on train_input, I tested it using two methods:
1) I fed test_input to the 'net' obtained from training and used the confusion function to check the accuracy against the real output, i.e. 'test_target'.
2) In the second method, I extracted the weights and biases from 'net', computed the network's response to test_input by hand, and stored the results in 'output'. Then I used confusion again to check the accuracy against the real output, i.e. 'test_target' (see the sketch below).
Or am I wrong in this approach?
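For reference, a minimal vectorized sketch of what method 2 computes. It is the same math as the loops in the question, using bsxfun so the bias addition works on any MATLAB version; test_input is assumed to be preprocessed already, as in the question's code:
hidden = tansig(bsxfun(@plus, weight_input * test_input, bias_input)); % 44 x 80 hidden activations
output = tansig(bsxfun(@plus, weight_hidden * hidden, bias_hidden));   % 40 x 80 raw outputs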
Sai Kumar Dwivedi on 14 Apr 2015
@Greg: What did you mean by "inverse parameters obtained from the training target"?
