MATLAB Answers

semantic segmentation of 4D MRI using 3D-UNet

Asked by GoodMic on 2 Jul 2019
Latest activity: answered by GoodMic on 19 Jul 2019
Hi,
I am trying to follow the tutorial "3-D Brain Tumor Segmentation Using Deep Learning" in order to train a deep 3-D U-Net neural network to segment tumors in 4-D MRI images. In the training images, the tumor and peritumoral tissue were contoured. The images and label files have 4 phases, and have been cropped into regions around the tumor, renormalized, and resized to 240x240x155x4 (the same size as the images used in the tutorial). The ground truth images are formatted as uint8 with two values: 0 for background and 1 for tumor.
Images and label files are stored in .mat format and divided into training, validation, and test directories, as in the tutorial. I then run the following code for patch extraction and augmentation:
patchSize = [32 32 32];
patchPerImage = 4;
miniBatchSize = 8;
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;
dsTrain = transform(patchds,@augment3dPatch);
Then the deep network is built following the tutorial, preceded by:
inputSize = [64 64 64 4];
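One quick way to see the mismatch before training is to compare what the datastore actually emits against the input layer size (a sketch using preview; the InputImage table variable name is assumed from the randomPatchExtractionDatastore documentation):
% Sanity check (sketch): compare an extracted patch against the expected input.
sample = preview(patchds);          % returns a table with one patch per row
size(sample.InputImage{1})          % patch size, here 32x32x32x4
inputSize                           % input layer size, here [64 64 64 4]
% The first three (spatial) dimensions must agree, or trainNetwork will fail.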
I managed to follow the tutorial through every section up to "Train the network", and I verified that the images are correctly stored and formatted. When I run the trainNetwork section with the doTraining option set to true, I get the following error:
Error using trainNetwork (line 165)
The subscript vectors must all be of the same size.
Error in deepAnalysisFollowingtutorial (line 195)
[net,info] = trainNetwork(dsTrain,lgraph,options);
Caused by:
Error using sub2ind (line 69)
The subscript vectors must all be of the same size.
The message "The subscript vectors must all be of the same size." sounds very basic, which makes me think I have gotten something fundamental wrong (e.g. a size or a format).
Do you have any advice about where to look? Should I prefer DICOM over .mat format for the data? Should the images perhaps be permuted, e.g. from 64x64x64x4 to 64x64x4x64? Otherwise, where can I start with 4-D segmentation using deep learning?


3 Answers

Answer by Divya Gaddipati on 18 Jul 2019
Accepted Answer

Hi,
Since you defined the input layer size as [64 64 64 4], the network expects each input to be of size [64 64 64]. The input you are providing, however, is of size [32 32 32], because you defined the patch size as [32 32 32].
You could change the patch size to [64 64 64] so that it matches the input layer size. Because the resize factor is smaller, less information is lost in this case than when resizing to [32 32 32], but it might slow down training.
If you want training to be faster, you could instead change the input layer size to [32 32 32 4] so that it accepts inputs of size [32 32 32].
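The two consistent configurations described above can be written out as follows (a sketch; variable names follow the tutorial):
% Option 1: larger patches to match the existing input layer (slower training).
patchSize = [64 64 64];
inputSize = [64 64 64 4];   % 3 spatial dimensions + 4 channels (phases)

% Option 2: smaller input layer to match the existing patches (faster training).
patchSize = [32 32 32];
inputSize = [32 32 32 4];

% In both cases inputSize(1:3) must equal patchSize.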



Answer by GoodMic on 19 Jul 2019

Dear Divya, thank you! That was certainly one of my mistakes. But I now get another error. Here is the code, where I use a very simple CNN just for illustration; my images are 32x32x32x6 (six phases):
patchSize = [32 32 32];
patchPerImage = 8;
miniBatchSize = 16;
patchds = randomPatchExtractionDatastore(volds,pxds,patchSize, ...
'PatchesPerImage',patchPerImage);
patchds.MiniBatchSize = miniBatchSize;
layers = [
image3dInputLayer([32 32 32 6])
convolution3dLayer(3,12,'Stride',1,'Padding','Same')
batchNormalizationLayer
reluLayer
transposedConv3dLayer(3,12,'Stride',1,'Cropping',1)
batchNormalizationLayer
reluLayer
softmaxLayer
pixelClassificationLayer
]
opts = trainingOptions('sgdm', ...
'InitialLearnRate',1e-3, ...
'ExecutionEnvironment','CPU',...
'MaxEpochs',100);
trainingData = pixelLabelImageDatastore(volds,pxds);
analyzeNetwork(layers)
[net2,info] = trainNetwork(patchds,layers,opts);
I get the following error:
Error using trainNetwork (line 165)
Matrix index is out of range for deletion.
Error in segmentationScript (line 62)
[net2,info] = trainNetwork(patchds,layers,opts);
Caused by:
Matrix index is out of range for deletion.
I tried changing pretty much everything, in particular the number of filters and nodes in the convolution layers. Could it be that segmentation does not work on 4-D images?

  1 comment

Hi,
Is the size of the output of your network the same as that of the ground truth?



Answer by GoodMic on 19 Jul 2019

Hi Divya,
I tried with different network output sizes, including the same size as the ground truth (32x32x32x6), by changing the number of filters in the last convolution layer, e.g.
transposedConv3dLayer(3,6,'Stride',1,'Cropping',1)
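For a two-class problem (background vs. tumor), note that the channel count fed into softmaxLayer normally has to equal the number of classes rather than the number of input phases, and that the ground-truth label patch is spatial only (32x32x32), not 32x32x32x6. A minimal sketch of final layers sized this way (the class names are illustrative, not from the original post):
% Sketch: final layers for 2 classes; a 1x1x1 convolution maps the feature
% channels down to one channel per class before softmax.
numClasses = 2;
finalLayers = [
    convolution3dLayer(1,numClasses,'Name','convFinal')
    softmaxLayer
    pixelClassificationLayer('Classes',["background" "tumor"])
    ];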



