Large training set in Semantic Segmentation runs out of memory in trainNetwork

Lorant Szabo on 18 November 2019
Commented: Brian Derstine on 5 November 2021
Dear Community!
Since the following question has not been answered yet, I would like to give an update with more details.
https://www.mathworks.com/matlabcentral/answers/413264-large-training-set-in-semantic-segmentation
I would like to train on a dataset of 600 images of size 1208x1920 with 50 classes.
I used the code from that example, changing only the classes and the paths:
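(Editor's note: the actual code was not pasted into the post. Below is only a minimal sketch of the pattern used in the referenced MathWorks semantic segmentation example; the folder names, class names, and label IDs are placeholders, and the real setup used 50 classes and local paths.)

% Minimal sketch of the training setup from the referenced example.
% Folder names, class names, and label IDs are placeholders.
imds = imageDatastore('trainingImages');                 % 1208x1920 training images
classes  = ["road" "car"];                               % 50 classes in the real setup
labelIDs = [1 2];                                        % pixel values mapped to classes
pxds = pixelLabelDatastore('trainingLabels', classes, labelIDs);

imdsVal = imageDatastore('validationImages');            % 200 validation images
pxdsVal = pixelLabelDatastore('validationLabels', classes, labelIDs);
pximdsVal = pixelLabelImageDatastore(imdsVal, pxdsVal);

% Note: with a VGG16 encoder the input height/width should be divisible by 32,
% so 1208x1920 images may need padding or resizing before this step.
imageSize = [1208 1920 3];
lgraph = segnetLayers(imageSize, numel(classes), 'vgg16');

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 8, ...                              % value used in the example
    'MaxEpochs', 30, ...
    'ValidationData', pximdsVal);

pximds = pixelLabelImageDatastore(imds, pxds);
net = trainNetwork(pximds, lgraph, options);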
However, the training fails with the following error:
[Screenshot: matlab error.png showing the out-of-memory error]
In that error, 1208x1920 is the image size, 50 is the number of classes, and 200 is the number of validation pictures.
With only 1 validation picture, the training starts.
Memory: 64 GB
GPU: Titan X Pascal 12 GB
We would like to know the best way to overcome this problem.

Answers (1)

Raunak Gupta on 22 November 2019
Hi,
As mentioned in the referenced example, you may need to resize the images to a smaller size that fits into GPU memory, or you may try reducing MiniBatchSize to a smaller value like 4 or 2. If even 1 image doesn't fit into memory, you need to resize the images, choose a smaller network, or increase the GPU memory on the system. Since an imageDatastore is used here, the validation images are not all read into memory at once; instead, only 'MiniBatchSize' images are read at a time.
Since your image size is almost 3.4 times the size used in the example, I recommend first changing MiniBatchSize to 2, compared to 8 in the example.
If you want to increase MiniBatchSize, the best way is to increase the GPU memory on the system.
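(Editor's note: as a rough illustration of the two suggestions above, lowering MiniBatchSize is a single change in trainingOptions, and downsampling can be done where the datastores read each file. The variable names follow the sketch in the question and are illustrative; whether a custom ReadFcn is the best mechanism may depend on your release.)

% Lower the mini-batch size so each iteration fits in the 12 GB GPU.
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 2, ...            % down from 8 in the example
    'Shuffle', 'every-epoch', ...
    'ValidationData', pximdsVal);

% Optionally read images and labels at half resolution instead of 1208x1920.
imds.ReadFcn = @(f) imresize(imread(f), 0.5);
pxds.ReadFcn = @(f) imresize(imread(f), 0.5, 'nearest');   % nearest keeps label IDs valid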
4 Comments
Brian Derstine on 8 December 2020 (edited 8 December 2020)
What do you mean by this: "unless the validation data is loaded specifically into the code,"?
Is there a particular code pattern that will cause the entire validation set to be loaded into memory?
Reduce your validation dataset size and it should work.
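(Editor's note: one concrete way to try that is to rebuild the validation datastores from a subset of the files before passing them to trainingOptions. The variable names follow the sketch above and are illustrative.)

% Keep only e.g. 20 of the 200 validation images to cut validation memory use.
idx = 1:20;                                       % or randperm(200, 20) for a random pick
imdsValSmall = imageDatastore(imdsVal.Files(idx));
pxdsValSmall = pixelLabelDatastore(pxdsVal.Files(idx), classes, labelIDs);
pximdsValSmall = pixelLabelImageDatastore(imdsValSmall, pxdsValSmall);

options = trainingOptions('sgdm', ...
    'MiniBatchSize', 2, ...
    'ValidationData', pximdsValSmall);            % or omit ValidationData entirely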
Brian Derstine on 5 November 2021
Also, an update in R2021b may fix this bug: "You are correct. In MATLAB R2021a, there is a bug in the Neural Network Toolbox where, depending on the workflow, if the validation data is large, you may run out of memory on the GPU. This has been reported in image segmentation and LSTM workflows.
The workaround is to reduce the validation data set size or train without validation data. Reducing the "miniBatchSize" does not fix this issue.
A patch for this bug was made in MATLAB R2021b. You may want to consider using this version of MATLAB to avoid encountering this issue." (response from matlab support)

