Multiple Input Single Output Segmentation using Deep Learning

19 views (last 30 days)
Koshy on 27 Apr 2019
Commented: 马瑞 李 on 21 Jan 2021
I have volumetric image data in 4 modalities and the corresponding segmented output data. I need to create a multi-input DAG network, and I have successfully built it as a layerGraph.
However, I am not able to train the network using trainNetwork; it throws an error saying that only one input can be fed to trainNetwork.
My code is below. store1, store2, store3, and store4 are the four 3-D input datastores, and pxds is the output (label) datastore.
inputSize = [64 64 64];
layers1 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input1')
    convolution3dLayer(3,155,'Padding','same','Name','conv_11')
    maxPooling3dLayer(4,'Name','maxpool1')];
layers2 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input2')
    convolution3dLayer(3,155,'Padding','same','Name','conv_21')
    maxPooling3dLayer(4,'Name','maxpool2')];
layers3 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input3')
    convolution3dLayer(3,155,'Padding','same','Name','conv_31')
    maxPooling3dLayer(4,'Name','maxpool3')];
layers4 = [
    image3dInputLayer(inputSize,'Normalization','none','Name','input4')
    convolution3dLayer(3,155,'Padding','same','Name','conv_41')
    maxPooling3dLayer(4,'Name','maxpool4')];
concat1 = concatenationLayer(3,4,'Name','depth_1');
outlayer = [
    transposedConv3dLayer(3,620,'Stride',2,'Cropping','same','Name','tconv_o1')
    convolution3dLayer(1,numLabels,'Name','convLast')
    softmaxLayer('Name','softmax')
    dicePixelClassification3dLayer('output')];
lgraph = layerGraph;
lgraph = addLayers(lgraph,layers1);
lgraph = addLayers(lgraph,layers2);
lgraph = addLayers(lgraph,layers3);
lgraph = addLayers(lgraph,layers4);
lgraph = addLayers(lgraph,concat1);
lgraph = addLayers(lgraph,outlayer);
lgraph = connectLayers(lgraph,'maxpool1','depth_1/in1');
lgraph = connectLayers(lgraph,'maxpool2','depth_1/in2');
lgraph = connectLayers(lgraph,'maxpool3','depth_1/in3');
lgraph = connectLayers(lgraph,'maxpool4','depth_1/in4');
lgraph = connectLayers(lgraph,'depth_1','tconv_o1');
plot(lgraph)
miniBatchSize = 1;
options = trainingOptions('rmsprop', ...
    'MaxEpochs',1, ...
    'InitialLearnRate',0.01, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',5, ...
    'LearnRateDropFactor',0.95, ...
    'Plots','training-progress', ...
    'Verbose',false, ...
    'MiniBatchSize',miniBatchSize);
[net,info] = trainNetwork({store1,store2,store3,store4},pxds,lgraph,options);
The error shown is:
Error in line:
[net,info] = trainNetwork({store1,store2,store3,store4},pxds,lgraph,options);
Caused by:
Network: Too many input layers. The network must have one input layer.
Detected input layers:
layer 'input1'
layer 'input2'
layer 'input3'
layer 'input4'
Please help me solve this problem, or suggest another way to train a network on multi-input image data.

Accepted Answer

gonzalo Mier on 28 Apr 2019
I will copy and paste Mahmoud Afifi's answer:
"One idea is to feed the network with concatenated inputs (e.g., image1;image2), then create splitter layers that split out each input. The problem here is that you have to feed the network with .mat files, not image paths. Another idea is to store your images as TIFF files, which can hold 4 channels. In this case, you can store a colored image (3 channels) and a grayscale one. Have a look at this example: https://www.mathworks.com/matlabcentral/fileexchange/65065-two-stream-cnn-for-gender-recognition-using-hand-images?s_tid=FX_rc1_behav (see the twoStream.m file)."
1 Comment
gonzalo Mier on 12 May 2019
Edited: madhan ravi on 12 May 2019
If this answer helped you, please accept it.

Sign in to comment.

More Answers (4)

Mahmoud Afifi on 29 Oct 2019
Edited: Mahmoud Afifi on 29 Oct 2019
I have uploaded more efficient code for a similar task. You can find it here.

Mohamed Abdelwahab on 30 Jan 2020
What about sequence input (LSTM)? How can we use multiple inputs?
1 Comment
马瑞 李 on 21 Jan 2021
Have you solved your problem? I have the same confusion.

Sign in to comment.


Yang YoonMo on 12 Nov 2019
How can I solve this problem?
I am training with 2 inputs, and my datastore returns 2 inputs. Then the following problem arises:
Invalid training data for multiple-input network. For a network with 2 inputs and 1 output, the datastore read function must return an M-by-3 cell array, but it returns an M-by-2 cell array.
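One way to meet that requirement, as a minimal sketch, is to route both inputs and the response through a single combined datastore (dsX1, dsX2, and dsY are placeholder names for the two input datastores and the response datastore, on a release that supports multiple-input training with trainNetwork):
% read() of the combined datastore returns a 1-by-3 cell {input1,input2,response},
% matching the M-by-3 layout the error message asks for.
dsTrain = combine(dsX1,dsX2,dsY);
net = trainNetwork(dsTrain,lgraph,options);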
1 Comment
Mahmoud Afifi on 12 Nov 2019
Edited: Mahmoud Afifi on 12 Nov 2019
Check this link. Hope it helps.

Sign in to comment.


Y. K. on 30 Apr 2020
I want to build a two-input, one-output network, but the first input is an image and the second input is a vector.
When I try to train the network with a cell array containing two sub-arrays (one for the images, one for the vectors), I get an error:
"Invalid training data for multiple-input network. For multiple-input training, use a single datastore."
I created a 4-D image array and a vector array as the inputs, plus a labels array for training.
How can I combine these data into a datastore? A MATLAB datastore could not read the data from variables defined in the workspace.
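A minimal sketch of one way to do this on newer releases (arrayDatastore requires R2020b or later), assuming imgs is an H-by-W-by-C-by-N image array, vecs is an N-by-F feature matrix, labels is an N-by-1 categorical vector, and lgraph is the two-input layer graph (all placeholder names):
% Wrap each in-memory array in a datastore, iterating over the observation dimension.
dsImg = arrayDatastore(imgs,'IterationDimension',4);
dsVec = arrayDatastore(vecs);
dsLbl = arrayDatastore(labels);
% read() of the combined datastore returns {image,vector,label}, the 1-by-3
% cell layout trainNetwork expects for a 2-input / 1-output network.
dsTrain = combine(dsImg,dsVec,dsLbl);
net = trainNetwork(dsTrain,lgraph,options);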
2 Comments
Mahmoud Afifi on 30 Apr 2020
You can think of packing your input into the image using a custom image read function, then unpacking it later.
Y. K. on 2 May 2020
There could be a smarter way than this.

Sign in to comment.

Categories
Find more on Image Data Workflows in Help Center and File Exchange

Release
R2019a
