How can I get the performance of multiple neural networks at once using a for-loop?
Hi, I am trying to get the performance of several neural networks, so I first created 100 networks:
% Train the Network
%[net,tr] = train(net,x,t);
% Train more networks for better performance
numNN = 100;
network_array = cell(1, numNN);
for i = 1:numNN
    fprintf('Training %d/%d\n', i, numNN)
    network_array{i} = train(net, x, t, 'CheckpointFile', 'MyCheckpoint', 'CheckpointDelay', 120);
    save 20212101_Workspace_NN_Torsion  % bending and torsion (Biegung und Torsion)
end
For a single network, the code for testing and recalculating the performance is:
% Test the Network
y = net_hiddenlayersize6_sortiert(x);
e = gsubtract(t,y);
performance = perform(net_hiddenlayersize6_sortiert,t,y)
% Recalculate Training, Validation and Test Performance
trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(net_hiddenlayersize6_sortiert,trainTargets,y)
valPerformance = perform(net_hiddenlayersize6_sortiert,valTargets,y)
testPerformance = perform(net_hiddenlayersize6_sortiert,testTargets,y)
Now I have to do this for all 100 networks. Should I use a for-loop here? My idea is the following:
% Test the Network, but for multiple networks
y = cell(1, numNN);
e = cell(1, numNN);
performance = cell(1, numNN);
for l = 1:numNN
    y{l} = network_array{l}(x);
    e{l} = gsubtract(t, y{l});
    performance{l} = perform(network_array{l}, t, y{l})
end
% Recalculate Training, Validation and Test Performance
% (this still uses the training record tr and the output of a single network)
trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(network_array{l}, trainTargets, y{l})
valPerformance = perform(network_array{l}, valTargets, y{l})
testPerformance = perform(network_array{l}, testTargets, y{l})
I am kind of stuck here. How can I run the test and the recalculation for all 100 networks at once? I found an example here:
% Then, ten neural networks are trained.
net = feedforwardnet(10);
numNN = 10;
nets = cell(1, numNN);
for i = 1:numNN
    fprintf('Training %d/%d\n', i, numNN)
    nets{i} = train(net, x1, t1);
end
% Next, each network is tested on the second dataset with both individual
% performances and the performance for the average output calculated.
perfs = zeros(1, numNN);
y2Total = 0;
for i = 1:numNN
    neti = nets{i};
    y2 = neti(x2);
    perfs(i) = mse(neti, t2, y2);
    y2Total = y2Total + y2;
end
perfs
y2AverageOutput = y2Total / numNN;
perfAveragedOutputs = mse(nets{1}, t2, y2AverageOutput)
I appreciate any help!
Answers (1)
Vineet Joshi
28 Jul 2021
Hi,
To get the performance of all the networks at once, you can use the parallel computing capabilities of MATLAB.
The following documentation pages will help you get familiar with these capabilities; you can then customize the approach for your application:
- Shallow Neural Networks with Parallel and GPU Computing
- Execute for-loop iterations in parallel on workers
- Parallel Computing Fundamentals
Hope this helps.
Thanks
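As a rough sketch of how the pieces could fit together (assuming the net, x, t and numNN from the question; the names tr_array, perfs, trainPerfs, valPerfs and testPerfs are introduced here only for illustration, and parfor requires Parallel Computing Toolbox), each network can be trained and evaluated in a loop, and the loop can later be switched to parfor once a parallel pool is available:
% Sketch only: train each network and keep its training record,
% so the train/val/test split can be recalculated per network.
numNN = 100;
network_array = cell(1, numNN);
tr_array = cell(1, numNN);          % hypothetical name for the training records
for i = 1:numNN                     % change "for" to "parfor" to run on workers
    fprintf('Training %d/%d\n', i, numNN)
    [network_array{i}, tr_array{i}] = train(net, x, t);
end
% Evaluate every network and recalculate its training, validation
% and test performance from its own training record.
perfs      = zeros(1, numNN);
trainPerfs = zeros(1, numNN);
valPerfs   = zeros(1, numNN);
testPerfs  = zeros(1, numNN);
for i = 1:numNN
    neti = network_array{i};
    tri  = tr_array{i};
    yi   = neti(x);
    perfs(i)      = perform(neti, t, yi);
    trainPerfs(i) = perform(neti, t .* tri.trainMask{1}, yi);
    valPerfs(i)   = perform(neti, t .* tri.valMask{1},   yi);
    testPerfs(i)  = perform(neti, t .* tri.testMask{1},  yi);
end
Note that train also accepts a 'useParallel','yes' option, which distributes a single training run across workers rather than training the 100 networks in parallel; see the documentation pages above for the trade-offs.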