Deep Learning - Distributed GPU Memory

Hello,
I have many very large input matrices (detector values) connected by a fully connected layer, and the output is a regression layer that reconstructs an image from them (only one image is processed at a time!). Because the data lack local correlation, a fully connected layer is necessary and a CNN cannot be used. However, the layer's weights exceed the available VRAM.
1. Can MATLAB distribute a fully connected layer across multiple GPUs?
I have the choice of buying 2x RTX 8000 (2x 48 GB) or 4x Titan RTX (4x 24 GB). An RTX 8000 costs 2.5x as much as a Titan RTX and has the same performance, but twice the memory.
2. Does NVLink pool GPU RAM?
Thanks

Answers (1)

Joss Knight on 28 Mar 2020

0 votes

No, there is nothing built in that does what you are after, i.e. distributing the weights of a fully connected layer across multiple GPUs. You could implement it yourself using parallel language constructs, but I assume this is not what you're after.
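To illustrate what "implement it yourself using parallel language constructs" could look like, here is a minimal sketch of model-parallel inference for a single fully connected layer, y = W*x + b, using `spmd` from the Parallel Computing Toolbox: the weight matrix is split row-wise so each worker's GPU stores only its own slice in VRAM. The layer sizes are placeholders, the weights are random (a real network would load trained values), and this covers only the forward pass, not training.

```matlab
% Sketch: distribute y = W*x + b across GPUs by splitting W row-wise.
% Assumes Parallel Computing Toolbox with one worker per GPU.
nOut = 40000;  nIn = 60000;              % placeholder layer sizes
x = rand(nIn, 1, 'single');              % one input vector at a time

parpool('local', gpuDeviceCount);        % one worker per available GPU
spmd
    gpuDevice(labindex);                 % bind this worker to its own GPU
    % Contiguous block of output rows owned by this worker
    per  = ceil(nOut / numlabs);
    lo   = (labindex - 1) * per + 1;
    hi   = min(labindex * per, nOut);
    Wloc = gpuArray.rand(hi - lo + 1, nIn, 'single');   % this GPU's slice of W
    bloc = gpuArray.zeros(hi - lo + 1, 1, 'single');    % matching slice of b
    yloc = gather(Wloc * gpuArray(x) + bloc);           % partial output rows
end
y = vertcat(yloc{:});                    % reassemble full output on the client
```

Since each worker holds only nOut/numlabs rows of W, four 24 GB GPUs would hold roughly the same total weight matrix as two 48 GB GPUs with this scheme, at the cost of a gather/concatenate step per layer.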


Asked: 24 Mar 2020

Answered: 28 Mar 2020

