How to check if the parallel server GPU can handle a particular network

1 view (last 30 days)
Miles Brim, 3 June 2022
Answered: Vatsal, 6 October 2023
I am running through a hyperparameter sweep of different network parameters. These include the number of LSTM layers, the number of hidden units in each layer, and what I call a shape parameter (whether the number of hidden units increases, decreases, or stays the same in subsequent layers).
I fully expect some of the settings to be too large to use the GPU, and I see this in the error returned about GPU memory.
What I would like to do is the following: have a function compute how much GPU memory a network will take by looking at the layers, and then update the training options to use the CPU if it is too large. See the pseudo-code below, where 'i' represents one set of hyperparameters.
How can I do this on my machine? Also, how can I do this on MATLAB Parallel Server?
if GPUmemory < NETmemory(layers{i})
    trainingoptions{i} = trainingOptions( ..., 'ExecutionEnvironment', 'cpu', ... );
else
    trainingoptions{i} = trainingOptions( ..., 'ExecutionEnvironment', 'gpu', ... );
end
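One way to sketch the NETmemory function above is to count learnable parameters analytically. For a MATLAB lstmLayer, the learnables are InputWeights (4*h-by-in), RecurrentWeights (4*h-by-h), and Bias (4*h-by-1). The sketch below assumes single precision (4 bytes per value) and multiplies by a crude headroom factor, since training also allocates activations, gradients, and optimizer state; the factor of 10 is a guess, not a measured value.

```matlab
function bytes = NETmemory(hiddenUnits, inputSize)
% Rough estimate of training memory for a stack of LSTM layers.
% hiddenUnits: vector of hidden sizes, one entry per LSTM layer.
% inputSize:   number of input features to the first layer.
    bytes = 0;
    inSize = inputSize;
    for k = 1:numel(hiddenUnits)
        h = hiddenUnits(k);
        % InputWeights + RecurrentWeights + Bias for one lstmLayer
        numParams = 4*h*inSize + 4*h*h + 4*h;
        bytes = bytes + numParams * 4;   % 4 bytes per single-precision value
        inSize = h;                      % next layer sees this layer's output
    end
    bytes = bytes * 10;                  % headroom factor (assumption)
end
```

This only accounts for the LSTM layers themselves; fully connected and other layers in the stack would need their own parameter counts added.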

Accepted Answer

Vatsal, 6 October 2023
Hi Miles Brim,
I understand that you want a function that estimates a network's memory usage from its layers and then switches the training options to the CPU when the requirement is too high. To achieve this, you can use the "whos" function, which reports a variable's memory footprint in bytes, so calling it on the network object gives the network's size. To query the available GPU devices and their memory, you can use the "gpuDevice" function.
You can also refer to the MATLAB documentation for "gpuDevice" for more information on its usage and syntax.
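A minimal sketch of this check, assuming a network object net already exists in the workspace and i indexes the sweep. Note that whos reports only the network object's own footprint, so the headroom multiplier (a guess here) is needed to cover activations, gradients, and optimizer state allocated during training:

```matlab
% Query the network object's size and the current GPU's free memory.
info = whos('net');                 % struct with a .bytes field
g = gpuDevice;                      % g.AvailableMemory is in bytes

% x10 headroom for activations/gradients is an assumption, not measured.
if info.bytes * 10 > g.AvailableMemory
    execEnv = 'cpu';
else
    execEnv = 'gpu';
end
trainingoptions{i} = trainingOptions('adam', 'ExecutionEnvironment', execEnv);
```

For the MATLAB Parallel Server part of the question: gpuDevice reports on the GPU visible to the process that calls it, so run this check inside the parfor or spmd body to query each worker's own GPU rather than the client's.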

More Answers (0)


