
Neural network: Inputs and targets have different numbers of samples

farzad on 12 Feb 2015
Commented: farzad on 20 Dec 2015
Hi all,
I have a piece of code and I am just checking how it works. My input matrix is:
input = [0.0600000000000000 0.00100000000000000 45 0.0508000000000000 0.0127000000000000]
and the target is a 6-by-6 matrix.
Using the code below, I get the error mentioned in the title: Inputs and targets have different numbers of samples
Error in Neural (line 17): [net,tr] = train(net,xn_tr,yn_tr);
Here is the full code (a quick dimension check follows it):
clc
clear
clear all
load('input.txt')
%load input
load ('taget.txt')
%normalizing data
[xn_tr,xs_tr] = mapstd(input);
[yn_tr,ys_tr] = mapstd(taget);
%%network
net=newff(xn_tr,yn_tr,[7 7],{'tansig'},'traingda');%two hidden layers of 7 tanh nodes, gradient descent with adaptive learning rate
net.trainParam.epochs =70;
net.trainParam.lr = 0.05;
net.trainParam.lr_inc = 1.05;
%training network
net.trainFcn='traingda';
[net,tr] = train(net,xn_tr,yn_tr);
%randomizing initial values of the weight matrix
net = init(net);
net.trainParam.show = NaN;
u_t=mapstd('apply',x,xs_tr);
%simulating output
y_hat=sim(net,u_t);
%plotting performance
plotperform(tr)
mse=mse(y-y_hat)
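For reference, train requires the input and target matrices to have the same number of columns, because each column is treated as one sample. A quick check along these lines (a sketch using the variable names loaded above; not part of the original post) makes the mismatch visible:
% after load('input.txt') and load('taget.txt')
size(input)   % 1-by-5  => 5 samples of a 1-dimensional input
size(taget)   % 6-by-6  => 6 samples of a 6-dimensional target
% train(net,x,t) errors unless size(x,2) == size(t,2),
% i.e. the two matrices must have one column per sample.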
  4 Comments
farzad on 14 Feb 2015
Dear and respected Dr. Greg,
I really appreciate your very kind attention. Honestly, I have very little time, too little to learn this coding step by step; the next topic for me to learn will be optimization, and I wish I could do it fast. I am reading the MATLAB help PDF on neural networks, but in the nndatasets I get a bit lost about where to start, and after seeing many different approaches that may not resemble this code, I have to come back to this code to see what is wrong with it.
Question 2: I got this code from a friend who worked on a different topic, and I was going to reuse it.
3. I do not know the difference, or why newff is obsolete here; I wish you would tell me.
4. I take this as advice, thank you.
5. Which one do you intend?
6. It is a problem I should solve as soon as possible.
7. Each question has its own problem; I could not follow them all, especially with the error I get, for which I have not yet found an answer myself.
Greg Heath on 14 Feb 2015
Dear and respected Dr. Greg
% I really appreciate your very kind attention , honestly I have a really short time , too short to learn this coding step by step ,the next topic will be optimization for me to learn , I wish I could do it fast , I am reading the MATLAB help,pdf on Neural NW, well in the nndatasets ,I get a bit lost, where to start ,and after all , seeing a lot of different ways that might not be similar to these code , I should come back to this code to see what is wrong in it
I do not recommend using this code. Start out with the code in
help newpr
and
doc newpr
Then search the NEWSGROUP and ANSWERS using the search word newpr
NEWSGROUP
newpr 17 hits
greg newpr 8 hits
ANSWERS
newpr 68 hits
greg newpr 56 hits
% question 2 : I got this code from a friend who has worked on another topic , and I was going to use it
Again, not recommended
% 3-I do not know the difference and why newff is obsolete here , I wish you would tell me
NEWFF is obsolete because MATLAB replaced it five years ago. Previously, MATLAB offered
NEWFIT for curve-fitting and regression; it automatically calls NEWFF.
NEWPR for pattern recognition and classification; it also automatically calls NEWFF.
>> help newpr
newpr Create a pattern recognition network.
Obsoleted in R2010b NNET 7.0. Last used in R2010a NNET 6.0.4.
Similarly for newfit and newff.
=========================
Currently, MATLAB offers
FITNET for curve-fitting and regression; it automatically calls FEEDFORWARDNET.
PATTERNNET for pattern recognition and classification; it also automatically calls FEEDFORWARDNET.
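As a minimal illustration of the current functions (a sketch added for reference, not part of the original reply), using the toolbox's built-in example datasets:
% patternnet: current replacement for newpr (pattern recognition)
[x, t] = simpleclass_dataset;     % built-in classification example
net = patternnet(10);             % 10 hidden nodes, like newpr(x,t,10)
net = train(net, x, t);
y = net(x);                       % one output column per sample
% fitnet: current replacement for newfit (curve fitting / regression)
[xf, tf] = simplefit_dataset;     % built-in regression example
netf = fitnet(10);
netf = train(netf, xf, tf);
yf = netf(xf);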
You can obtain the full online documentation of any function using help and doc at the command line. The biggest problem is that the corresponding examples tend to be pretty Mickey-Mouse. However, they provide the basics from which you can start. On the other hand, the scripts from the nntool are too detailed and do not emphasize the most important commands.
Finally, you can search the NEWSGROUP and ANSWERS for the examples in my posts. If you have any problems with them you can always alert me via email that you have posted a follow-up comment or question. I do not give direct advice via email.
% 4- I get this as an advice, thank you
%
% 5- which one do you intend ?
%
% 6- it's a problem I should solve asap
%
% 7- well each question has it's own problem , I could not follow all ,
% specially with the error that I get , I yet could not find an answer for
% that myself
Finally, I suggest using patternnet instead of the obsolete newpr. However, if you want to go with newpr, see my suggestions above.
Hope this helps.
Greg


Accepted Answer

Greg Heath on 14 Feb 2015
Edited: Stephen23 on 13 Dec 2015
Here is a simplified example using the NEWPR example in the help and doc documentation. I omitted
  1. An inner for loop over multiple random weight initializations and data divisions. To see examples of that type, search on greg Ntrials (a rough sketch is also appended after the code below).
  2. Extracting the individual trn/val/tst performances by using the training record tr to obtain the corresponding indices.
% >> help newpr
% load simpleclass_dataset
% net = newpr(simpleclassInputs,simpleclassTargets,20);
% net = train(net,simpleclassInputs,simpleclassTargets);
% simpleclassOutputs = net(simpleclassInputs);
close all, clear all, clc, plt = 0
[ x, t ] = simpleclass_dataset;
[ I N ] = size(x) % [ 2 1000 ]
[ O N ] = size(t) % [ 4 1000 ]
trueclass = vec2ind(t);
class1 = find(trueclass==1);
class2 = find(trueclass==2);
class3 = find(trueclass==3);
class4 = find(trueclass==4);
N1 = length(class1) % 243
N2 = length(class2) % 247
N3 = length(class3) % 233
N4 = length(class4) % 277
x1 = x(:,class1);
x2 = x(:,class2);
x3 = x(:,class3);
x4 = x(:,class4);
plt = plt + 1, figure(plt)
hold on
plot(x1(1,:),x1(2,:),'ko')
plot(x2(1,:),x2(2,:),'bo')
plot(x3(1,:),x3(2,:),'ro')
plot(x4(1,:),x4(2,:),'go')
Hub = -1+ceil( (0.7*N*O-O)/(I+O+1)) % 399
Hmax = 40 % Hmax << Hub
dH = 4 % Design ~10 candidate nets
Hmin = 2 % I know 0 and 1 are too small
rng(0) % Allows duplicating the results
j=0
for h=Hmin:dH:Hmax
j = j+1
net = newpr(x,t,h);
[ net tr y ] = train( net, x, t );
assignedclass = vec2ind(y);
err = assignedclass~=trueclass;
Nerr = sum(err);
PctErr(j,1) = 100*Nerr/N;
end
h = (Hmin:dH:Hmax)';
PctErr = PctErr;
results = [ h PctErr ]
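For completeness, here is a rough sketch of the omitted inner loop over Ntrials random initializations (added as an illustration, not part of the original answer; Ntrials = 10 and the fresh newpr call per trial are assumptions):
Ntrials = 10;                          % random initializations per candidate h
j = 0;
for h = Hmin:dH:Hmax
    j = j+1;
    best = Inf;
    for trial = 1:Ntrials
        net = newpr(x,t,h);            % fresh net => new random weights and data division
        [ net tr y ] = train( net, x, t );
        Nerr = sum( vec2ind(y) ~= trueclass );
        best = min( best, 100*Nerr/N );  % keep the best trial for this h
    end
    PctErr(j,1) = best;
end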
Hope this helps.
Thank you for formally accepting my answer
Greg
  4 Comments
Greg Heath on 13 Dec 2015
The NEWPR code is for pattern recognition and classification. Use NEWFIT for FITting the sine function.
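For example (a small sketch added for illustration, not from the original comment), fitting a noisy sine with fitnet, the current form of newfit:
x = linspace(0, 2*pi, 200);           % sample points
t = sin(x) + 0.05*randn(size(x));     % noisy sine targets
net = fitnet(10);                     % 10 hidden nodes
[net, tr] = train(net, x, t);
yhat = net(x);
plot(x, t, '.', x, yhat, '-')         % data vs. network fit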
farzad on 20 Dec 2015
Dear Professor Heath,
Out of curiosity, why don't you write a paper or a book explaining your method here?


More Answers (0)
