minibatchqueue or arrayDatastore drops my data precision from double to single
I get XTrain from MNIST via processImagesMNIST and move it to the GPU, so its type is gpuArray dlarray (double underlying type).
Then I use this code to make mini-batches:
```
miniBatchSize = 128;
dsTrain = arrayDatastore(XTrain,IterationDimension=4);

% numOutputs = 1;
mbqTest = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    PartialMiniBatch="discard");

% numObservationsTrain = size(XTrain,4);
% numIterationsPerEpoch = ceil(numObservationsTrain / miniBatchSize);
% numIterations = numEpochs * numIterationsPerEpoch;

%% test batch order
i = 0;
while hasdata(mbqTest)
    i = i + 1;
    x = next(mbqTest);
    if ~hasdata(mbqTest)
        disp(i)   % number of full mini-batches per epoch
    end
end
```
I find that x is a single gpuArray dlarray, while XTrain is a double gpuArray dlarray.
Which part lowers the precision, and how can I avoid it?
Answer (1)
Walter Roberson, 28 September 2022
GPU training does not support double precision. If you look at the available training options, precision cannot be selected.
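That said, the cast in the code above most likely happens inside minibatchqueue itself: its OutputCast option defaults to "single", so every batch is converted on output regardless of the underlying type in the datastore. A minimal sketch (reusing the same dsTrain and preprocessMiniBatch from the question) that requests double output instead:

```
mbqDouble = minibatchqueue(dsTrain,1, ...
    MiniBatchSize=miniBatchSize, ...
    MiniBatchFcn=@preprocessMiniBatch, ...
    MiniBatchFormat="SSCB", ...
    OutputCast="double", ...        % default is "single"; override to keep double precision
    PartialMiniBatch="discard");

x = next(mbqDouble);
underlyingType(x)                   % inspect the underlying class of the batch
```

Whether double batches are actually usable then depends on the rest of the training pipeline, per the answer above.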