Predictions during training differ from the final trained result
I have twelve weighted classes that I train with large, augmented training and validation pixelLabelImageDatastores.
The network was created with:
% DeepLab v3+ with a ResNet-18 backbone
lgraph = deeplabv3plusLayers(imageSize, numel(classes), 'resnet18');
% Weighted pixel classification layer for the twelve classes
lgraph = replaceLayer(lgraph, "classification", pixelClassificationLayer('Name','labels','Classes',tbl.Name,'ClassWeights',classWeights));
% Input layer without built-in normalization
lgraph = replaceLayer(lgraph, "data", imageInputLayer(imageSize,"Name","data","Normalization","none"));
The training accuracy converges nicely to about 99.3% (98.5%-99.7%), and the loss to about 0.05 (for both training and validation).
When I test the generated DAGNetwork with "jaccard", only the first ten classes have a high IoU; the last two are zero! I have also tested different normalizations, such as zscore, always with the same result. When I use the "predict" or "semanticseg" functions to check individual images, classes 11 and 12 do indeed appear to be poorly learned.
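For reference, the per-class check looks roughly like this (a minimal sketch; net, I and gt are placeholder names for the trained DAGNetwork, one test image and its ground-truth categorical labels):
pred = semanticseg(I, net);   % categorical prediction, same size as gt
iou  = jaccard(pred, gt);     % one Jaccard/IoU value per class
iou(11:12)                    % the last two classes come back as 0 here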
But if I set a breakpoint in the "forwardLoss" function in "SpatialCrossEntropy.m" during training and examine, e.g., class 11 with "imshow(Y(:,:,11))", that class appears to be learned perfectly well!
What happens in "trainNetwork()" when training is finished? Under what circumstances would the scores seen in forwardLoss() differ from the final network's predictions?
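For comparison, this is roughly how I inspect the class-11 score map of the finished network (a sketch; I is a placeholder for one of the training images that looked fine at the breakpoint):
scores = predict(net, I);        % H-by-W-by-12 softmax scores from the trained DAGNetwork
figure, imshow(scores(:,:,11))   % compare against imshow(Y(:,:,11)) inside forwardLoss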
4 comments
Abhijit Bhattacharjee
19 May 2022
There might be more specifics in your code that need to be addressed one-on-one. I'd suggest submitting a technical support request.
Answers (0)