Training time delay neural networks with Parallel and GPU Computing?
Abdelwahab Afifi
25 February 2021
Edited: Abdelwahab Afifi on 1 March 2021
I'm trying to speed up the training of my 'timedelaynet' by using the GPU support that I get from the Parallel Computing Toolbox. Although I use the same network structure for both, when I compare performance on the CPU vs. the GPU, the CPU achieves better performance, and training on the GPU takes longer than on the CPU. Any explanation?
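For reference, here is roughly what I am doing (the delays, layer size, and example dataset are illustrative placeholders, not my real data):

% Illustrative setup, not my actual data
[X, T] = simpleseries_dataset;            % placeholder sequence data
net = timedelaynet(1:2, 10);              % input delays 1:2, 10 hidden neurons
net.trainFcn = 'trainscg';                % gradient-based algorithm; Jacobian training
                                          % (the default trainlm) is not supported on GPU
[Xs, Xi, Ai, Ts] = preparets(net, X, T);  % shift inputs to account for the delays

netCPU = train(net, Xs, Ts, Xi, Ai);                   % CPU training
netGPU = train(net, Xs, Ts, Xi, Ai, 'useGPU', 'yes');  % GPU training (slower for me)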
Accepted Answer
Anshika Chaurasia
1 March 2021
Hi,
Performance of GPU code depends on the algorithm used, the data size of the problem, and the GPU hardware. A significant performance gain over the CPU is seen only when the algorithm is computationally intensive and the data size is large enough; otherwise, the overhead of moving data between host and GPU memory can outweigh the computation itself.
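A quick way to confirm this on your own data is to time the two runs directly. A minimal sketch, assuming X and T hold your input and target sequences as cell arrays:

% Minimal timing sketch; X and T are your input/target cell arrays
net = timedelaynet(1:2, 10);
net.trainFcn = 'trainscg';                % gradient-based algorithm, required for GPU
net.trainParam.showWindow = false;        % suppress the training GUI for cleaner timing
[Xs, Xi, Ai, Ts] = preparets(net, X, T);

tic; train(net, Xs, Ts, Xi, Ai);                   tCPU = toc;   % CPU run
tic; train(net, Xs, Ts, Xi, Ai, 'useGPU', 'yes');  tGPU = toc;   % GPU run
fprintf('CPU: %.2f s   GPU: %.2f s\n', tCPU, tGPU);

For a small time delay network, the per-iteration arithmetic is tiny, so the host-to-GPU transfer and kernel launch overhead typically dominate, which would explain the slower GPU timings you are seeing.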
Refer to the link below to learn why the GPU does not outperform the CPU in some cases:
Hope it helps!