How to add an Inception-Res block and a Dense-Inception block to 3D U-Net layers
Dear All,
I have the code below:
lgraph = layerGraph();
tempLayers = [
image3dInputLayer([128 128 128 3],"Name","ImageInputLayer")
convolution3dLayer([3 3 3],16,"Name","Encoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-1-BN-1")
reluLayer("Name","Encoder-Stage-1-ReLU-1")
convolution3dLayer([3 3 3],32,"Name","Encoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-1-BN-2")
reluLayer("Name","Encoder-Stage-1-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-1-MaxPool","Stride",[2 2 2])
convolution3dLayer([3 3 3],32,"Name","Encoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-2-BN-1")
reluLayer("Name","Encoder-Stage-2-ReLU-1")
convolution3dLayer([3 3 3],64,"Name","Encoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Encoder-Stage-2-BN-2")
reluLayer("Name","Encoder-Stage-2-ReLU-2")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
maxPooling3dLayer([2 2 2],"Name","Encoder-Stage-2-MaxPool","Stride",[2 2 2])
convolution3dLayer([3 3 3],64,"Name","Bridge-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Bridge-BN-1")
reluLayer("Name","Bridge-ReLU-1")
convolution3dLayer([3 3 3],128,"Name","Bridge-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Bridge-BN-2")
reluLayer("Name","Bridge-ReLU-2")
transposedConv3dLayer([2 2 2],128,"Name","Decoder-Stage-1-UpConv","BiasLearnRateFactor",2,"Stride",[2 2 2],"WeightsInitializer","he")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
concatenationLayer(4,2,"Name","Decoder-Stage-1-Concatenation")
convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-1-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-1-BN-1")
reluLayer("Name","Decoder-Stage-1-ReLU-1")
convolution3dLayer([3 3 3],64,"Name","Decoder-Stage-1-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-1-BN-2")
reluLayer("Name","Decoder-Stage-1-ReLU-2")
transposedConv3dLayer([2 2 2],64,"Name","Decoder-Stage-2-UpConv","BiasLearnRateFactor",2,"Stride",[2 2 2],"WeightsInitializer","he")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
concatenationLayer(4,2,"Name","Decoder-Stage-2-Concatenation")
convolution3dLayer([3 3 3],32,"Name","Decoder-Stage-2-Conv-1","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-2-BN-1")
reluLayer("Name","Decoder-Stage-2-ReLU-1")
convolution3dLayer([3 3 3],32,"Name","Decoder-Stage-2-Conv-2","Padding","same","WeightsInitializer","he")
batchNormalizationLayer("Name","Decoder-Stage-2-BN-2")
reluLayer("Name","Decoder-Stage-2-ReLU-2")
convolution3dLayer([1 1 1],5,"Name","Final-ConvolutionLayer","Padding","same","WeightsInitializer","he")
softmaxLayer("Name","Softmax-Layer")
pixelClassificationLayer("Name","Segmentation-Layer")];
lgraph = addLayers(lgraph,tempLayers);
% clean up helper variable
clear tempLayers;
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-2","Encoder-Stage-1-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-1-ReLU-2","Decoder-Stage-2-Concatenation/in2");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Encoder-Stage-2-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-2-ReLU-2","Decoder-Stage-1-Concatenation/in2");
lgraph = connectLayers(lgraph,"Decoder-Stage-1-UpConv","Decoder-Stage-1-Concatenation/in1");
lgraph = connectLayers(lgraph,"Decoder-Stage-2-UpConv","Decoder-Stage-2-Concatenation/in1");
plot(lgraph);
and the resulting layer graph looks like this:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1769759/image.png)
However, I want to insert an Inception-Res block and a Dense-Inception block into my 3D U-Net, as in the picture below:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1769764/image.png)
This is the proposed Inception-Res block:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1769769/image.png)
This is the proposed Dense-Inception block:
![](https://www.mathworks.com/matlabcentral/answers/uploaded_files/1769774/image.png)
Can anyone help me?
Accepted Answer
Malay Agarwal on 18 September 2024
Edited: Malay Agarwal on 18 September 2024
You can implement the Inception-Res and Dense Inception-Res blocks by defining a custom nested deep learning layer.
You can refer to the following resource, which shows how to create a residual block using nested layers: https://www.mathworks.com/help/releases/R2023a/deeplearning/ug/define-nested-deep-learning-layer.html.
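In outline, such a nested layer wraps a small dlnetwork (holding the parallel Inception branches and the residual addition) inside a custom layer whose predict method simply forwards the input through that network. Below is a rough 3-D sketch of this pattern; the class name inceptionResLayer3d, the two branches (1x1x1 and 1x1x1 followed by 3x3x3), and the 1x1x1 projections on the main and skip paths are assumptions for illustration only, so adapt the branch structure and filter counts to the block diagrams in your question:
classdef inceptionResLayer3d < nnet.layer.Layer & nnet.layer.Formattable
    % inceptionResLayer3d  Rough sketch of a nested Inception-Res block for
    % 3-D data. Branch structure and filter counts are placeholders.

    properties (Learnable)
        Network % dlnetwork holding the Inception branches and residual add
    end

    methods
        function layer = inceptionResLayer3d(numFilters,args)
            arguments
                numFilters
                args.Name = ""
            end
            layer.Name = args.Name;
            layer.Description = "Inception-Res block (" + numFilters + " filters)";

            lgraph = layerGraph();

            % Branch 1: 1x1x1 convolution
            lgraph = addLayers(lgraph,[
                convolution3dLayer(1,numFilters,Padding="same",Name="b1_conv")
                batchNormalizationLayer(Name="b1_bn")
                reluLayer(Name="b1_relu")]);

            % Branch 2: 1x1x1 convolution followed by 3x3x3 convolution
            lgraph = addLayers(lgraph,[
                convolution3dLayer(1,numFilters,Padding="same",Name="b2_conv1")
                reluLayer(Name="b2_relu1")
                convolution3dLayer(3,numFilters,Padding="same",Name="b2_conv2")
                batchNormalizationLayer(Name="b2_bn")
                reluLayer(Name="b2_relu2")]);

            % Merge the branches along the channel dimension (dim 4 for
            % SSSC data), project back to numFilters channels, and add the
            % projected block input (residual connection)
            lgraph = addLayers(lgraph,concatenationLayer(4,2,Name="concat"));
            lgraph = addLayers(lgraph,[
                convolution3dLayer(1,numFilters,Padding="same",Name="proj")
                additionLayer(2,Name="add")
                reluLayer(Name="out_relu")]);

            % Skip path: 1x1x1 projection so both addition inputs have
            % numFilters channels regardless of the block input size
            lgraph = addLayers(lgraph, ...
                convolution3dLayer(1,numFilters,Padding="same",Name="skip_proj"));

            lgraph = connectLayers(lgraph,"b1_relu","concat/in1");
            lgraph = connectLayers(lgraph,"b2_relu2","concat/in2");
            lgraph = connectLayers(lgraph,"concat","proj");
            lgraph = connectLayers(lgraph,"skip_proj","add/in2");

            % The unconnected inputs (b1_conv, b2_conv1, skip_proj) become
            % the inputs of the internal network
            layer.Network = dlnetwork(lgraph,Initialize=false);
        end

        function layer = initialize(layer,layout)
            % Initialize the internal network, passing the block input
            % layout once per network input
            if ~layer.Network.Initialized
                layer.Network = initialize(layer.Network,layout,layout,layout);
            end
        end

        function Z = predict(layer,X)
            % Feed the block input to both branches and the skip projection
            Z = predict(layer.Network,X,X,X);
        end
    end
end
The 1x1x1 projection on the skip path is one way to keep the channel counts consistent for the addition; if your design keeps the number of output channels equal to the number of input channels, you can instead connect the block input directly to the addition layer.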
I have attached an example implementation of the Inception-Res block to the answer. If you'd like to plot the internal network, you can use the following code:
% Create the attached Inception_Res layer and initialize its internal network
layer = Inception_Res();
inputSize = [224 224 3];
layout = networkDataLayout(inputSize, "SSC");
layer = initialize(layer, layout);
% Plot the dlnetwork nested inside the custom layer
plot(layer.Network)
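Since your U-Net works on 3-D volumes, a 3-D version of the block would be initialized with an "SSSC" layout instead, for example:
% Hypothetical layout matching the 128-by-128-by-128-by-3 input of your U-Net
layout3d = networkDataLayout([128 128 128 3],"SSSC");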
Note that the code contains a commented-out line (line 72) in the initialize() method. I commented it out because I am not sure about the dimensions of the convolutional operations, and initializing the internal network led to an error. You will need to adjust the parameters of the convolutional operations so that all the dimensions work out.
The Dense Inception-Res block can be created similarly.
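As a rough, hypothetical example of wiring such a block into your existing graph, assuming the sketched inceptionResLayer3d class above with 32 output channels (so it matches what Encoder Stage 1 currently feeds to the max-pooling layer and the skip connection), you could replace the plain convolution block of Encoder Stage 1 after lgraph has been built:
% Remove the plain Conv-BN-ReLU block of Encoder Stage 1
lgraph = removeLayers(lgraph,["Encoder-Stage-1-Conv-1","Encoder-Stage-1-BN-1", ...
    "Encoder-Stage-1-ReLU-1","Encoder-Stage-1-Conv-2","Encoder-Stage-1-BN-2", ...
    "Encoder-Stage-1-ReLU-2"]);
% Wire the nested Inception-Res block in its place
lgraph = addLayers(lgraph,inceptionResLayer3d(32,Name="Encoder-Stage-1-InceptionRes"));
lgraph = connectLayers(lgraph,"ImageInputLayer","Encoder-Stage-1-InceptionRes");
lgraph = connectLayers(lgraph,"Encoder-Stage-1-InceptionRes","Encoder-Stage-1-MaxPool");
lgraph = connectLayers(lgraph,"Encoder-Stage-1-InceptionRes","Decoder-Stage-2-Concatenation/in2");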
Hope this helps!