Why are NVIDIA A100 GPUs slower than RTX 3090 GPUs?

재호 곽 on 13 May 2022
Commented: Joss Knight on 16 May 2022
Hello, we have an RTX 3090 GPU and an A100 GPU.
Using the MATLAB Deep Learning Toolbox Model for ResNet-50 Network, we found that the A100 was 20% slower than the RTX 3090 when training the ResNet-50 model.
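For reference, the run was set up roughly like the sketch below (the data sizes and training options here are illustrative placeholders, not our exact configuration):

% Rough sketch of the benchmark: train ResNet-50 for one epoch on the GPU.
net    = resnet50;                    % from the Deep Learning Toolbox Model for ResNet-50 Network
lgraph = layerGraph(net);

% Dummy ImageNet-sized data so the example is self-contained.
numObs  = 256;
XTrain  = rand(224, 224, 3, numObs, 'single');
classes = net.Layers(end).Classes;                    % classes of the output layer
YTrain  = classes(randi(numel(classes), numObs, 1));  % random labels drawn from those classes

opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu', ...   % run on the installed GPU
    'MiniBatchSize', 64, ...
    'MaxEpochs', 1, ...
    'Verbose', false);

tic
trainNetwork(XTrain, YTrain, lgraph, opts);   % time one epoch on each card
toc

Timing one epoch this way on each card gives a like-for-like comparison.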
Our questions are as follows.
1. I heard that the A100 and the 3090 differ in speed because they have different numbers of CUDA cores and Tensor Cores. Does MATLAB use only CUDA cores?
If Tensor Cores can be used, I would appreciate a link to an example that uses them.
2. When training on the GPU, single, double, or half precision can be used. I heard that MATLAB automatically uses double precision; please confirm whether that is correct.
Thank you.

Accepted Answer

David Willingham on 13 May 2022
2 Comments
재호 곽 on 16 May 2022 (edited 16 May 2022)
Thank you for your answer. However, I have already checked the URL you sent me. I am currently using GPU code and running training, not inference. I am wondering why the A100 is slower than the 3090 during training, and whether double precision is available for training in MATLAB.
Joss Knight on 16 May 2022
It is possible to train models in double precision, either by using model functions or by using a dlnetwork and converting its weights to double precision before training.
However, I don't believe this is what you want. You won't get a speedup over the RTX 3090 training in single precision; it will still be considerably slower.
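For illustration, here is a minimal sketch of the dlnetwork route described above; the network, data sizes, and the modelLoss helper are placeholders for a user-defined custom training loop, not tested code:

% Build a dlnetwork (the classification output layer must be removed first)
% and cast its learnable parameters to double before custom-loop training.
lgraph = layerGraph(resnet50);
lgraph = removeLayers(lgraph, lgraph.Layers(end).Name);  % assumes the last layer is the output layer
net    = dlnetwork(lgraph);

% dlupdate applies the function to every learnable parameter of the network.
net = dlupdate(@(w) dlarray(double(extractdata(w))), net);

% The input data must also be double (and on the GPU) so the forward and
% backward passes run in double precision.
X = dlarray(gpuArray(double(rand(224, 224, 3, 8))), 'SSCB');  % dummy mini-batch
% [loss, gradients] = dlfeval(@modelLoss, net, X, T);          % modelLoss: user-defined loss function

dlupdate is used here so that every parameter in net.Learnables is cast in a single call.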
