semantic segmentation of 4D MRI using 3D-UNet
13 views (last 30 days)
Hi,
I am trying to apply the tutorial "3-D Brain Tumor Segmentation Using Deep Learning"
in order to train a deep 3-D U-Net neural network for segmentation of tumors in 4-D MRI images. In the training images, the tumor and peritumoral tissue were contoured. The images and the label files have 4 phases, and have been cropped into regions around the tumor, renormalized, and resized to 240x240x155x4 (the same size as the images used in the tutorial). The ground-truth images are formatted as uint8 with two values: 0 for background and 1 for tumor.
Images and label files are stored in .mat format and divided into training, validation, and test directories as in the above tutorial. I then run the following code for patch extraction and augmentation:
patchSize = [32 32 32];
patchPerImage = 4;
miniBatchSize = 8;
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;
dsTrain = transform(patchds,@augment3dPatch);
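One way to narrow this down (a sketch; the table variable names `InputImage` and `ResponsePixelLabelImage` are the defaults produced by `randomPatchExtractionDatastore`, and the layout may differ after the `augment3dPatch` transform) is to preview one sample and compare the patch size against the network's input layer size:

```matlab
% Pull one sample from the patch datastore and check its dimensions.
sample = preview(patchds);
size(sample.InputImage{1})               % should match inputSize(1:3)
size(sample.ResponsePixelLabelImage{1})  % label patch must have the same spatial size
```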
Then the deep network is built following the tutorial, preceded by:
inputSize = [64 64 64 4];
I managed to follow the tutorial through all sections up to the "Train the network" section, and I verified that the images are correctly stored and formatted. When I run the trainNetwork section with the doTraining option set to true, I get the following error:
Error using trainNetwork (line 165)
The subscript vectors must all be of the same size.
Error in deepAnalysisFollowingtutorial (line 195)
[net,info] = trainNetwork(dsTrain,lgraph,options);
Caused by:
Error using sub2ind (line 69)
The subscript vectors must all be of the same size.
The "The subscript vectors must all be of the same size." message looks like a very basic mistake, which makes me think I have gotten something very basic wrong (e.g., sizes or formats).
Do you have any advice on where to look? Should I prefer DICOM over .mat format for the data? Maybe the images should be permuted, e.g., from 64x64x64x4 to 64x64x4x64? Otherwise, where can I start for 4-D segmentation using deep learning?
0 comments
Accepted Answer
Divya Gaddipati
18 Jul 2019
Hi,
Since you defined the input layer size as [64 64 64 4], the network expects each input patch to be of size [64 64 64]. However, the patches you are providing are of size [32 32 32], because you set the patch size to [32 32 32].
You could change the patch size to [64 64 64] so that it matches the input layer size. Since the resize factor is smaller, less information is lost in this case than when resizing to [32 32 32], but it might slow down the training process.
If you want the training process to be faster, you could instead change the input layer size to [32 32 32 4] so that it accepts inputs of size [32 32 32].
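Putting the two options together (a sketch reusing the variable names from the question; only one pair should be active at a time, and the rest of the tutorial's network-building code should use the matching `inputSize`):

```matlab
% Option 1: larger patches to match the existing input layer
% (less resizing, less information loss, but slower training)
patchSize = [64 64 64];
inputSize = [64 64 64 4];

% Option 2: keep the smaller patches and shrink the input layer
% (faster training)
% patchSize = [32 32 32];
% inputSize = [32 32 32 4];

% Rebuild the patch datastore with the chosen patch size.
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
    'PatchesPerImage',patchPerImage);
```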
0 comments
More Answers (2)
GoodMic
19 Jul 2019
1 comment
Divya Gaddipati
19 Jul 2019
Hi,
Is the output size of your network the same as that of the ground truth?
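One way to check this (a sketch; `analyzeNetwork` is a Deep Learning Toolbox function that reports per-layer activation sizes for a layer graph) is:

```matlab
% Inspect the layer graph: the final softmax / pixel-classification
% activations should have the same spatial size as the label patches.
analyzeNetwork(lgraph)
```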