How to train a feedforward network to solve the XOR function

Albert on 16 Feb 2013
Answered: ga on 21 May 2024
I'm new to MATLAB, so please excuse me if this is a stupid question (and excuse my English).
I'm trying to train a feedforward network to solve the XOR function:
1 hidden layer with 2 neurons; other settings are the defaults: TANSIG, backprop, TRAINLM, LEARNGDM, MSE
MATLAB version R2012b
close all, clear all, clc, format compact
p = [0 1 0 1 ; 0 0 1 1];
t = [0 1 1 0];
net = feedforwardnet(2,'trainlm');
net = train(net,p,t);
a = net(p)
I've tried this code, and I tried 'nntool' and 'nnstart' too. It always seems like the training algorithm splits the 'p' set into
2 samples - training set,
1 sample - validation set,
1 sample - testing set
As a result, the network trains on partial data (2 pairs of digits instead of 4), the training process stops with "Validation stop" or "Minimum gradient reached (1.00e-010)" after very few iterations (1-10), and simulation shows the network is untrained.
  1. Is my guess right (about splitting the 'p' set)?
  2. How can I manually give validation data (input and output sets) to the training algorithm?
  3. Should I somehow expand the 'p' and 't' sets and then use divideblock?
  4. Any other ideas?
Thanks!

Accepted Answer

Greg Heath on 16 Feb 2013
Edited: Greg Heath on 16 Feb 2013
1. [ I N ] = size(x) % [ 2 4 ]
[ O N ] = size(t) % [ 1 4 ]
Neq = prod(size(t)) % 4 = No. of training equations
2. For this small data set it doesn't make sense to use data division for validation stopping. So,
net.divideFcn = 'dividetrain'; % or equivalently, = '';
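To address question 2 above as well: a minimal sketch of the two alternatives (the index assignments below are only an illustration, not part of the original answer):
net = feedforwardnet(2);
% (a) use ALL samples for training -- no validation/test stopping
net.divideFcn = 'dividetrain';
% (b) or assign the division yourself with 'divideind'
net.divideFcn = 'divideind';
net.divideParam.trainInd = [ 1 2 3 ]; % columns of p used for training
net.divideParam.valInd = 4;           % column(s) used for validation
net.divideParam.testInd = [];         % no test set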
3. Since the No. of estimated weights for H hidden nodes is
Nw = (I+1)*O = 3 for H = 0
Nw = (I+1)*H + (H+1)*O for H > 0
the condition Neq >= Nw yields the following upper bound for H:
Hub = (Neq-O)/(I+O+1) % 3/4
which is only satisfiable for H = 0 (no hidden layer). However, from a 2-dimensional plot we know that it will take at least 2 hidden nodes to separate the "0" class diagonal corners [ 0 1 ; 0 1 ] from the "1" class diagonal corners [ 1 0 ; 0 1 ].
Subsequently, for H = 2, Nw = 9 > Neq = 4. Therefore, there will be an infinite number of solutions for the weights.
net = patternnet(2); % for classification
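As a quick numerical check of the counts in point 3 (a sketch using the quantities defined above):
I = 2; O = 1; Neq = 4; H = 2;
Nw = (I+1)*H + (H+1)*O % 9 weights to estimate > Neq = 4
Hub = (Neq-O)/(I+O+1)  % 0.75, i.e. below even 1 hidden node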
4. Choose MSEgoal so that the coefficient of determination (R^2, see Wikipedia) is >= 0.99. Then the model will represent at least 99% of the biased target variance:
net.trainParam.goal = 0.01*var(t',1);
5. The success of the design depends on the placement of the random initial weights. Therefore it may be necessary to make Ntrials >= 10 separate designs (use a loop, as in the sketch below).
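A minimal sketch of such a multi-trial loop (the names Ntrials, bestperf and bestnet are illustrative, not from the toolbox):
Ntrials = 10;
bestperf = Inf;
for i = 1:Ntrials
    net = patternnet(2);                 % fresh random initial weights each trial
    net.divideFcn = 'dividetrain';
    net.trainParam.goal = 0.01*var(t',1);
    [ net, tr ] = train(net,p,t);
    if tr.best_perf < bestperf           % keep the best design so far
        bestperf = tr.best_perf;
        bestnet = net;
    end
end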
6. When training the net, use the extended output form
[ net tr y e ] = train(net,x,t);
Then, everything you need to know, besides the output y and error e, can be obtained directly from the training structure tr.
7. It is STRONGLY recommended that, somewhere along the line, you investigate the contents of tr.
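For example, a few of the fields worth looking at (a sketch; field names as in the training record of this toolbox era):
[ net, tr, y, e ] = train(net,p,t);
tr.stop        % why training stopped, e.g. 'Minimum gradient reached.'
tr.num_epochs  % number of epochs actually run
tr.best_perf   % best training performance (MSE)
tr.trainInd    % which columns of p were used for training
plotperform(tr) % plot performance vs. epoch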
Hope this helps.
Thank you for formally accepting my answer.
Greg
2 Comments
Albert on 17 Feb 2013
close all, clear all, clc, format compact
p = [0 1 0 1 ; 0 0 1 1];
t = [0 1 1 0];
net = feedforwardnet(2,'trainlm');
net.trainParam.goal = 0.01*var(t',1);
net.divideFcn = 'dividetrain';
net = train(net,p,t);
a = net(p)
This works almost perfectly; net.divideFcn = 'dividetrain'; helped.
But in 1 of 10 experiments the network randomly falls into some local minimum and can't get out: the iteration count goes to 250-500 and training breaks on "minimum gradient reached", untrained.
I get the same result with net = patternnet(2,'trainlm'); instead of net = feedforwardnet(2,'trainlm');
I think the reason is maybe bad initial weights.
Anyway, this is much better than it was. Thanks again.
Greg Heath on 17 Feb 2013
Edited: Greg Heath on 17 Feb 2013
>net = feedforwardnet(2,'trainlm');
net = patternnet(2); % for classification
net = fitnet(2); % for regression
net = feedforwardnet(2); % NEVER use the generic form directly
>net = train(net,p,t);
>a = net(p)
[ net, tr, a ] = train(net,p,t);
NMSE = tr.best_perf/var(t') % tr.perf is the per-epoch vector; use the scalar best_perf
R2 = 1 - NMSE
>but in 1 of 10 experiments the network randomly falls into some local minimum and can't get out ...
No fault on your part. This is normal. That is precisely why you have to design multiple nets.
In general, you would rank the nets by their validation error and predict generalization error by using the test-set error of the best net chosen by the validation set.
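For a data set large enough to divide, that selection could look like this sketch (bestnet, bestvperf and gentest are illustrative names; Ntrials as in the earlier loop):
bestvperf = Inf;
for i = 1:Ntrials
    net = patternnet(2);           % default divideFcn = 'dividerand'
    [ net, tr ] = train(net,p,t);
    if tr.best_vperf < bestvperf   % rank nets by validation error
        bestvperf = tr.best_vperf;
        gentest = tr.best_tperf;   % test-set error of the chosen net
        bestnet = net;
    end
end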

Sign in to comment.

More Answers (4)

Albert on 16 Feb 2013
I've just tried expanding p and t by copying the digits 10 times.
Now the network is learning, but I was forced to set
max_fail = 100 (validation checks)
and
min_grad = 1e-15 (minimum gradient),
otherwise the training process still breaks.
2 Comments
Greg Heath on 17 Feb 2013
Edited: Greg Heath on 17 Feb 2013
Never use max_fail above 10.
Do you understand its function?
You don't need to change min_grad ... something else is wrong.
Albert on 17 Feb 2013
max_fail prevents overlearning, I guess? I think a big max_fail value has the same effect as turning off the validation check.
Now I use max_fail = 6. Thanks to you, net.divideFcn = 'dividetrain'; helped very well: no more validation-check breaks.
min_grad sometimes stopped training when the network got into a local minimum. Now that is no longer necessary, and I use the default min_grad.
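For reference, these are the two stopping parameters under discussion (the values shown are the defaults mentioned in this thread):
net.trainParam.max_fail = 6;     % stop after 6 consecutive rises in validation error
net.trainParam.min_grad = 1e-10; % stop when the error gradient falls below this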

Sign in to comment.


Albert on 17 Feb 2013
Thank you, Greg, for the thorough explanation.
A lot of information for further analysis!

Sarita Ghadge on 15 Sep 2017
clc; close all; clear all;
P = [0 0 1 1; 0 1 0 1]; T = [0 1 1 0];
net = feedforwardnet(200); % one hidden layer with 200 neurons
net.trainFcn = 'trainbr';
net.divideFcn = 'dividetrain';
[net, tr] = train(net,P,T);
a = net(P(:,1))
a = net(P(:,2))
a = net(P(:,3))
a = net(P(:,4))
It works for XOR using feedforwardnet with >= 150 hidden neurons.

ga on 21 May 2024
Train the neural network using a two-input XOR gate, given the initial values:
w1 = 0.9;
w2 = 1.8;
b = -0.9;
Requirements:
Analyze the steps to train a perceptron neural network.
Program the training using MATLAB software.
Use nntool for survey and analysis.
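A minimal sketch of the perceptron update rule with those initial values (the learning rate lr is an assumption; note that a single perceptron cannot separate XOR, so this loop will not converge on an XOR target):
P = [0 0 1 1; 0 1 0 1];   % two-input patterns
T = [0 1 1 0];            % XOR targets
w = [0.9 1.8]; b = -0.9;  % given initial values
lr = 1;                   % assumed learning rate
for epoch = 1:20
    for k = 1:4
        a = double(w*P(:,k) + b >= 0); % hard-limit activation
        e = T(k) - a;                  % perceptron error
        w = w + lr*e*P(:,k)';          % perceptron learning rule
        b = b + lr*e;
    end
end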
