What is the difference between 'multi-gpu' and 'parallel-gpu' in the 'trainingOptions' function of the Deep Learning Toolbox?

Hi everyone,
I have two NVIDIA RTX 3060 GPUs installed on my local computer and I want to train a neural network in parallel on both of them. I am wondering which is the better strategy, 'multi-gpu' or 'parallel-gpu'. Does anyone know how they work and what the difference between them is?
Thank you.

Accepted Answer

Matt J on 22 May 2024
Edited: 22 May 2024
According to the doc, 'parallel-gpu' has the additional capability of being able to use remote GPUs. Since that doesn't apply to the hardware environment you describe, you can probably use either one.
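For reference, here is a minimal sketch of how either option is selected, assuming the newer trainnet workflow and illustrative variable names (imdsTrain, layers) that stand in for your own data and network:

% Train on all local GPUs; MATLAB starts a local pool with one worker per GPU.
options = trainingOptions("sgdm", ...
    ExecutionEnvironment="multi-gpu", ...
    MaxEpochs=10, ...
    MiniBatchSize=128);
net = trainnet(imdsTrain, layers, "crossentropy", options);

% "parallel-gpu" instead uses the GPU workers of the current parallel pool,
% which may be local or on a remote cluster:
% options = trainingOptions("sgdm", ExecutionEnvironment="parallel-gpu");

With two identical local GPUs the training behaviour should be essentially the same either way; the difference is mainly in where the pool of workers is allowed to live.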

More Answers (1)

Joss Knight on 14 Jun 2024
The purpose of 'multi-gpu' is effectively to ensure you are using a local pool with numGpus workers, without your needing to understand anything about configuring a cluster. So either can work, but 'multi-gpu' will give you helpful errors if you are doing something you didn't intend. A sketch of the 'parallel-gpu' route follows below.
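If you do go the 'parallel-gpu' route, you would typically open the pool yourself before training. A hedged sketch, assuming a recent release where the "Processes" pool profile is available:

% Open a local pool with one process worker per available GPU,
% then let "parallel-gpu" distribute training across those workers.
numGPUs = gpuDeviceCount("available");
pool = parpool("Processes", numGPUs);   % on older releases, parpool("local", numGPUs)
options = trainingOptions("sgdm", ExecutionEnvironment="parallel-gpu");

With 'multi-gpu' you can skip the explicit parpool call; MATLAB manages the local pool for you.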
