Why are there NaNs in the weights of a convolutional layer in the DeepLab v3+ semantic segmentation network?
I am using the example at https://ww2.mathworks.cn/help/vision/examples/semantic-segmentation-using-deep-learning.html.
However, I want to use DeepLab to segment remote sensing images. Because those images have four channels, I changed the first convolutional layer's channel size from 3 to 4.
Training did run, but the accuracy did not increase. When I stopped training, I found that the weights in the first convolutional layer had become NaN, even though the loss was not NaN. I am confused about why this happens.
I have tried several times, and this problem occurs every time.
Answers (1)
Ganesh Regoti
30 Jul 2019
Assuming you are using the pretrained ResNet-18 backbone, the model was trained with a 3-channel first convolutional layer and expects 3-channel input.
If the size of that convolutional layer is changed, the network must be retrained so that the weights of the layers adjust to the new input.
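A minimal sketch of one way to make that change, assuming the deeplabv3plusLayers helper (Computer Vision Toolbox, R2019b or later), that it initializes the backbone with pretrained ResNet-18 weights, and the standard ResNet-18 layer names 'data' and 'conv1' (verify with analyzeNetwork). The idea is to keep the pretrained RGB filters and give the fourth band a small random initialization:

% Sketch: adapt DeepLab v3+ (ResNet-18 backbone) to 4-channel input.
% Layer names 'data' and 'conv1' are assumptions based on standard
% ResNet-18 naming; check them with analyzeNetwork(lgraph).
numClasses = 5;                                    % placeholder class count
lgraph = deeplabv3plusLayers([512 512 3], numClasses, 'resnet18');

% Swap in an input layer that accepts 4 channels. 'Normalization','none'
% sidesteps the pretrained 3-channel zero-center statistics; normalize
% the data in your datastore instead.
newInput = imageInputLayer([512 512 4], 'Name', 'data', ...
    'Normalization', 'none');
lgraph = replaceLayer(lgraph, 'data', newInput);

% Rebuild conv1 with 4 input channels: reuse the pretrained RGB weights
% and initialize the extra band with small random values.
oldConv = lgraph.Layers(strcmp({lgraph.Layers.Name}, 'conv1'));
newConv = convolution2dLayer(7, 64, 'Stride', 2, 'Padding', 3, ...
    'Name', 'conv1');
W = zeros(7, 7, 4, 64, 'single');
W(:, :, 1:3, :) = oldConv.Weights;                    % pretrained RGB filters
W(:, :, 4, :) = 0.01 * randn(7, 7, 1, 64, 'single');  % new 4th band, near zero
newConv.Weights = W;
newConv.Bias = oldConv.Bias;
lgraph = replaceLayer(lgraph, 'conv1', newConv);

Starting the fourth channel near zero keeps the modified layer close to its pretrained behavior, which tends to make the early steps of retraining more stable.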
DeepLab v3+ ends in a softmax layer, which normalizes the class scores; this could be why the output, and hence the loss, shows no zeros or NaNs even while weights in an earlier layer do.
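For the retraining itself, a hedged sketch of trainingOptions: GradientThreshold clips large gradients, and exploding gradients in a freshly modified layer are a common way for weights to turn NaN while the reported loss still looks finite. trainingData is a placeholder for your 4-channel pixel-labeled datastore; the hyperparameter values are illustrative only:

% Sketch of retraining options for the modified network.
opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-3, ...   % start low for a modified pretrained net
    'GradientThreshold', 1, ...     % clip gradients to guard against NaN weights
    'MaxEpochs', 30, ...
    'MiniBatchSize', 8, ...
    'Shuffle', 'every-epoch', ...
    'Plots', 'training-progress');
net = trainNetwork(trainingData, lgraph, opts);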