Self Organizing Map training question
Hi,
I have a difficult question about using MATLAB's Neural Network Toolbox. I would like to train a SOM (self-organizing map) on a data set; however, my data set is quite large, so I need to split it into sections and train on each section individually. Here is my code now:
%% Combination method
% IN THIS EXAMPLE IT'S POSSIBLE BECAUSE IT'S A SMALL DATASET. IT IS NOT POSSIBLE FOR MY ACTUAL DATA
% Load and combine the data
data1 = [1:10:400;1:20:800]';
data2 = [400:1:440;800:1:840]';
combined = [data1;data2]';
% Create a Self-Organizing Map
dimension1 = 5;
dimension2 = 5;
net = selforgmap([dimension1 dimension2]);
% Train the Network
[net,tr] = train(net,combined);
%Plot combined results
plotsomhits(net,combined);
plotsomhits(net,data1');
plotsomhits(net,data2');
%% Iterative method
% This is what I actually want to use to train the network
% Create a Self-Organizing Map
dimension1 = 5;
dimension2 = 5;
net = selforgmap([dimension1 dimension2]);
% Train the Network
data1 = [1:10:400;1:20:800]';
[net,tr] = train(net,data1');
data2 = [400:1:440;800:1:840]';
[net,tr] = train(net,data2');
% View the Network
combined = [data1;data2]';
plotsomhits(net,combined);
plotsomhits(net,data1');
plotsomhits(net,data2');
As you can tell, the results are skewed significantly because the network is trained twice, with the second call dominating. Is there any way to limit the bias when you are training the second time?
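One possible workaround (a sketch, not a verified fix): keep a single network object, call configure() once with data that covers the input range, and lower trainParam.epochs so that each chunk of data makes a smaller update. The function names below (selforgmap, configure, train, plotsomhits) are standard toolbox calls, but whether train() preserves the learned ordering between calls can depend on the toolbox version, so it is worth verifying by inspecting net.IW{1,1} before and after each call.

```matlab
% Hedged sketch: chunked SOM training against one network object.
% Assumption: configure() initializes the weights once, so subsequent
% train() calls update the same map rather than re-initializing it.
dimension1 = 5;
dimension2 = 5;
net = selforgmap([dimension1 dimension2]);

data1 = [1:10:400; 1:20:800]';
data2 = [400:1:440; 800:1:840]';

% Initialize against representative data covering the input range
net = configure(net, data1');

net.trainParam.epochs = 50;   % fewer epochs per chunk (default is 200)
[net, tr] = train(net, data1');
[net, tr] = train(net, data2');

combined = [data1; data2]';
plotsomhits(net, combined);
```

Note that train() still restarts the neighborhood-shrinking schedule on every call, so some bias toward the most recent chunk may remain even with this approach.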
5 Comments
Simon Nunn, 13 Jul 2017
Edited: Simon Nunn, 13 Jul 2017
I am also having a similar issue to this.
First, I can see that redefining the network with a second call to selforgmap() completely discards any training performed by the first call to train().
However, removing this redefinition of the network is not sufficient to solve the issue.
Having taken the time to read the manual, I also experimented with using adapt() instead of train(). adapt() is supposed to perform one step of the training process, precisely so that data does not have to be batch-processed; this works fine with NARX and other feedforward MLPs, but with a SOM it seems to reset the network in the same way that train() does.
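For readers following along, the adapt() experiment described above might look like the minimal sketch below. This is an assumed usage, not a fix; as noted, for SOMs it reportedly behaves like a reset rather than an incremental update.

```matlab
% Hedged sketch of the adapt() experiment described above. For SOMs this
% reportedly does not preserve state across calls, unlike for NARX/MLPs.
net = selforgmap([5 5]);
data1 = [1:10:400; 1:20:800]';
net = configure(net, data1');       % set input size/range without training
[net, y, e] = adapt(net, data1');   % intended single-pass update
```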
Digging even deeper, I started to experiment with calling the learning function learnsomb() directly, with the intent of manually applying the delta weight matrix that it returns, but I've struggled to find a suitable input for A.
After some reverse engineering of the MATLAB code I've finally run out of steam, and I've come here to find answers.
Here's the code I have so far:
% lifted from debugging MATLAB code
% feval(learnFcn,net.IW{i,j}, ...
% PD{i,j,ts},
% IWZ{i,j},
% N{i},
% Ac{i,ts+numLayerDelays},
% t,
% e,
% gIW{i,j},
% gA{i},
% net.layers{i}.distances,
% net.inputWeights{i,j}.learnParam,
% IWLS{i,j});
w = net.IW{1,1}; % the weight matrix itself, not the cell array
d = net.layers{1}.distances;
a = net(P); % this is not right :(
LP = net.inputWeights{1}.learnParam;
[dW,ls] = learnsomb(w,P,[],[],a,[],[],[],[],d,LP,[]);
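A guess at a suitable A (an assumption, not verified against the toolbox internals): a SOM layer computes negdist followed by compet, so the one-hot winner matrix can be rebuilt directly from the weights. Under that assumption, a self-contained version of the call might look like:

```matlab
% Hedged sketch: building the layer output A for learnsomb() by hand.
% Assumption: the SOM layer is negdist -> netsum -> compet, so the
% winner matrix can be recomputed from the current weights.
net = selforgmap([5 5]);
P = [1:10:400; 1:20:800];          % 2 x Q input matrix (example data)
net = configure(net, P);
w  = net.IW{1,1};                  % S x R codebook (weight) matrix
z  = negdist(w, P);                % S x Q negative distances (weighted input Z)
a  = compet(z);                    % one-hot winners: a candidate for A
d  = net.layers{1}.distances;
lp = net.inputWeights{1,1}.learnParam;
[dW, ls] = learnsomb(w, P, z, z, a, [], [], [], [], d, lp, []);
net.IW{1,1} = w + dW;              % manually apply the returned delta
```

Passing z for both Z and N assumes netsum of a single weighted input is the identity, which is a further unverified assumption.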
negar BAIBORDI, 29 Jun 2023
Moved: DGM, 29 Jun 2023
Hello, I have difficulty understanding the sample hits plot. I want to know which of the data points that I imported into the self-organizing map each neuron relates to. Please help me.
negar BAIBORDI, 29 Jun 2023
Moved: DGM, 29 Jun 2023
I couldn't analyze this plot. Please guide me.
DGM, 29 Jun 2023
What plot?
Everybody else in this thread has been inactive for years. If you want to ask a question, ask a clear and specific question. Don't hide a tangent in a random dead thread somewhere and expect people to find it and guess what you want.
negar BAIBORDI, 30 Jun 2023
Hello, I have difficulty understanding the sample hits plot. I want to know which of the data points that I imported into the self-organizing map toolbox in MATLAB each neuron relates to. Please help me. I attached a picture to make it clear.
Best regards
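For what it's worth, the mapping that plotsomhits() summarizes can be extracted directly: net(x) returns a one-hot vector per sample, and vec2ind() converts that to the index of the winning neuron. A minimal sketch using made-up example data:

```matlab
% Hedged sketch: finding which input samples each SOM neuron "owns",
% i.e. the per-neuron counts that plotsomhits() draws as hit counts.
x = [1:10:400; 1:20:800];          % 2 x Q example data
net = selforgmap([5 5]);
net = train(net, x);
winners = vec2ind(net(x));         % winning neuron index for each sample
samplesForNeuron3 = find(winners == 3);   % columns of x mapped to neuron 3
```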