Does a custom deep learning training loop take more memory than trainNetwork()?

Qiao Hu on 20 Oct 2020
Commented: Qiao Hu on 31 Oct 2020
Hi,
I followed the instructions at the link below to create a custom training loop for a U-Net architecture.
With the same network architecture and the same "multi-gpu" setting (I have two RTX 2060 GPUs), I found that the custom training loop can handle a mini-batch size of at most 4, while the built-in trainNetwork() function can handle a mini-batch size of up to 16.
Is it normal for a custom training loop to use more GPU memory than trainNetwork()?
Thanks!
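
[Editor's note: for reference, here is a minimal sketch of the kind of custom training loop being compared with trainNetwork(), following the dlnetwork/dlfeval pattern from the Deep Learning Toolbox documentation. The names unetLayerGraph, nextMiniBatch, numEpochs, and numIterationsPerEpoch are illustrative placeholders, not from the original post.]

net = dlnetwork(unetLayerGraph);     % placeholder: U-Net layers as a dlnetwork
velocity = [];
learnRate = 0.01;

for epoch = 1:numEpochs
    for i = 1:numIterationsPerEpoch
        [X, T] = nextMiniBatch(i);                    % placeholder data helper
        dlX = dlarray(gpuArray(single(X)), 'SSCB');   % spatial, spatial, channel, batch
        dlT = dlarray(gpuArray(single(T)), 'SSCB');

        % Evaluate loss and gradients via automatic differentiation
        [loss, gradients] = dlfeval(@modelGradients, net, dlX, dlT);

        % SGDM parameter update
        [net, velocity] = sgdmupdate(net, gradients, velocity, learnRate);
    end
end

function [loss, gradients] = modelGradients(net, dlX, dlT)
    dlY = forward(net, dlX);
    loss = crossentropy(dlY, dlT);
    gradients = dlgradient(loss, net.Learnables);
end

In such a loop the input dlarrays, the intermediates traced by dlfeval for automatic differentiation, and the gradients all reside on the GPU at once, which is one reason it can need more memory per mini-batch than trainNetwork. Multi-GPU training additionally requires an explicit parallel pool, as discussed in the comments below.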

Accepted Answer

Shashank Gupta on 28 Oct 2020
Yes, this is expected behaviour. A custom training loop does take some extra memory, while the built-in trainNetwork function is highly optimised. The more custom code in the loop, the more inefficiency, and thus the more GPU memory usage. Nevertheless, you can optimise the custom training loop, but even then there is no guarantee it will be as well optimised as trainNetwork.
I hope this clears up some of your confusion.
  3 Comments
Shashank Gupta on 30 Oct 2020
Hey Qiao,
Have a look at this link; it might enable you to use parallel capabilities in the custom training loop.
Currently there is no specific reference on optimising custom training loops, because it is hard to generalise and produce documented guidance. These jobs are quite subjective and depend on what you want to implement. Nevertheless, a few suggestions: look for dlarray-enabled functions for fast computation, use built-in MATLAB functions rather than implementing your own, and keep the loop code as lean as possible (see the sketch below).
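
[Editor's note: a small hypothetical illustration of these suggestions inside a model-gradients function, where dlY is the network output and dlT the targets, both formatted 'SSCB' dlarrays. A dlarray-enabled built-in such as crossentropy is optimised and tends to be lighter on temporary GPU memory than an equivalent hand-written loss.]

% Hand-rolled cross-entropy: each element-wise step (log, .*, sum) can
% allocate an intermediate array on the GPU
lossManual = -sum(dlT .* log(dlY), 'all') / size(dlY, 4);

% dlarray-enabled built-in: optimised, fewer temporaries
lossBuiltin = crossentropy(dlY, dlT);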
Qiao Hu on 31 Oct 2020
Thanks a bunch!



Release: R2020b