Image Classification Returning Different Results on Different Computers

2 views (last 30 days)
Sofia Brown on 27 March 2021
Commented: Sofia Brown on 5 September 2021
Hello,
I am running a neural network for image classification using the MATLAB Deep Learning Toolbox. The network contains three convolution2dLayer instances and is trained with trainNetwork. If I run exactly the same code on a different computer, I get vastly different results: one computer reaches very high accuracy, around 99%, while the other never learns at all (the training accuracy never rises above 50%). The only difference between the two computers is the MATLAB version: the 99% computer runs MATLAB R2019a (Deep Learning Toolbox 12.1), while the 50% computer runs MATLAB R2019b (Deep Learning Toolbox 13.0). Is it expected that two versions this close together would return such different results? I also find it surprising that the older version performs better, but since machine learning can behave like a black box, perhaps this is entirely possible? Are there any other reasons the results could differ so much?

Answer (1)

Mahesh Taparia on 30 March 2021
Hi
This is unexpected behaviour. Check that all the data preprocessing steps are the same, and that the training parameters, number of epochs, weight initialization, and other hyperparameters match in both networks. If the problem persists, please share your code and relevant information; that will be helpful.
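If the random weight initialization is the suspect, one way to make the two installations comparable is to fix the seed and state the initializers explicitly rather than relying on whatever defaults each toolbox version uses. A minimal sketch (the layer sizes here are placeholders; adapt them to your own network):

```
% Sketch: fix the seed and pin the initializers explicitly, so both
% installations start from the same weights (layer sizes are placeholders).
rng(0, 'twister');                           % fix the global random stream
layers = [
    imageInputLayer([256 320 1])
    convolution2dLayer(3, 8, ...
        'WeightsInitializer', 'glorot', ...  % name the initializer explicitly
        'BiasInitializer', 'zeros')
    reluLayer
    fullyConnectedLayer(2, ...
        'WeightsInitializer', 'glorot', ...
        'BiasInitializer', 'zeros')
    softmaxLayer
    classificationLayer];
```

With the seed fixed and the initializers pinned, any remaining difference between the two machines points at the data pipeline or the training options rather than the starting weights.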
  3 comments
Sofia Brown on 5 September 2021
Hello,
Thank you very much for this response. I am sorry for my months-long delay in writing back; I took a break from this project for the summer. I hope you are still able to help.
So, if I understand correctly, I just need to initialize the random seed at the beginning of my code, like this (using, say, 0 as the seed):
rootFolder = 'TrainingAll5Sets';
categories = {'0deg', 'eighthdeg'};
rng(0);
%imds = imageDatastore(fullfile(rootFolder, categories), 'LabelSource', 'foldernames');
imds = imageDatastore(fullfile(rootFolder, categories), ...
    'LabelSource', 'foldernames', 'FileExtensions', '.png');
% Define layers
layers = [
    imageInputLayer([256 320 1])
    convolution2dLayer(1,5,'Padding',2)
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(6,15,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(12,40,'Padding','same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];
% Set training options (defaults from 7.15.20)
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.00001, ...
    'MaxEpochs',300, ...
    'Shuffle','every-epoch', ...
    'Verbose',false, ...
    'Plots','training-progress');
% Train
[net, info] = trainNetwork(imds, layers, options);
I have not changed any part of the code except adding the rng(0) call near the top. By just making this small change, will it generate the same random weights on each run, as you mentioned? I'm not sure I completely understand how rng works in relation to the rest of the code, and whether I need to incorporate it anywhere else.
Thank you!
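For reference, a minimal sketch of what rng actually does: rng(0) seeds MATLAB's global random stream, and trainNetwork draws from that stream both when it initializes the layer weights and when it shuffles the data with 'Shuffle','every-epoch'. A single call before trainNetwork is therefore enough, provided no other code consumes random numbers in between:

```
rng(0);          % seed the global stream once, before training
w1 = rand;       % any extra random draw here would shift the numbers
                 % that trainNetwork later receives
rng(0);          % re-seeding restores the identical sequence
w2 = rand;
isequal(w1, w2)  % logical 1 (true): same seed, same draws
```

Note that a fixed seed makes a single machine reproducible; it does not guarantee identical results across toolbox versions if the defaults they draw those numbers for (for example, the weight initializers) have changed.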
