GPU vs. CPU in training time

Ali Al-Saegh on 5 Dec 2020
Answered: Joss Knight on 11 Dec 2020
I am training a CNN composed of four convolutional/max-pooling layers with about 100 kernels in each convolutional layer. The training data are .mat files, each containing a 22x1000 matrix. I created an image datastore as a container for the training data, using a custom ReadFcn to read the .mat files.
GPU: GTX 1660 Super
CPU: i7-9700
MATLAB version: R2019b
Hard drive: NVMe M.2 SSD
Mini-batch size: 400
The training time of one epoch on the GPU is only about half that on the CPU. I would expect training on the GPU to be much faster than on the CPU alone, which is not the case for me. Any suggestions on how to address this?
My raw data are EEG signals. Each EEG trial is a 22x1000 matrix, where 22 is the number of time signals and 1000 is the number of time points in each signal. Any suggestions for storing the data in a different way, or creating a different datastore, to achieve better GPU utilization?
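For reference, a minimal sketch of the datastore setup described above (the folder path and the variable name inside each .mat file are assumptions):

% Image datastore over the .mat files, using a custom read function
imds = imageDatastore('C:\data\eeg_trials', ...
    'FileExtensions', '.mat', ...
    'ReadFcn', @readEEGTrial);

function data = readEEGTrial(filename)
% Load one 22x1000 EEG trial; 'trial' is an assumed variable name
s = load(filename);
data = s.trial;
end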
3 Comments
Ali Al-Saegh on 7 Dec 2020
Awesome! This really reduced the training time on the GPU; the speedup is much better now.
I am now using imageInputLayer([22 1000 1]) as the input layer of my CNN. Is this correct for my case?
Joss Knight on 11 Dec 2020
If the 22 time signals are different channels of each observation, then your input data should be 1000-by-1-by-22 for a temporal convolutional network using 2-D convolution.
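A minimal sketch of that layout change, applied inside the read function (variable names are assumptions): permute each 22x1000 trial so that time becomes the first (spatial) dimension and the 22 signals become channels, and size the input layer to match.

% Rearrange a 22x1000 trial into 1000x1x22 (height x width x channels)
data = permute(trialMatrix, [2 3 1]);

% Matching input layer for the network
layer = imageInputLayer([1000 1 22]);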


Accepted Answer

Joss Knight on 11 Dec 2020
Use the DispatchInBackground training option to improve throughput when your data access and preprocessing are costly.
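A minimal sketch of how that option can be set (the solver shown is an assumption; the mini-batch size is taken from the question, and DispatchInBackground requires Parallel Computing Toolbox):

% Prefetch and preprocess mini-batches on background workers
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 400, ...
    'ExecutionEnvironment', 'gpu', ...
    'DispatchInBackground', true);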
