GPU vs. CPU in training time
I am training a CNN composed of four convolution-max-pooling layers with about 100 kernels in each convolutional layer. The training data are .mat files, each containing a 22x1000 matrix. I created an image datastore as a container for the training data, using a custom ReadFcn to read the .mat files.
GPU: GTX 1660 SUPER
CPU: i7-9700
MATLAB version: R2019b
Hard drive: NVMe M.2 SSD
Mini-batch size: 400
Training one epoch on the GPU takes only about half the time it takes on the CPU. I expected GPU training to be much faster than CPU-only training, but that is not the case for me. Any suggestions for handling this?
My raw data are EEG signals. Each EEG trial is a 22x1000 matrix, where 22 is the number of time signals and 1000 is the number of time points in each signal. Any suggestions for storing the data differently, or creating a different datastore, to achieve better GPU utilization?
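For reference, a minimal sketch of the setup described above, assuming each .mat file stores its trial in a variable named trial and the files live in a folder named eegTrials (both names are hypothetical, not from the original post):

imds = imageDatastore('eegTrials', ...
    'FileExtensions', '.mat', ...
    'ReadFcn', @readTrial);

function data = readTrial(filename)
    % Load one trial and return the raw 22x1000 matrix
    s = load(filename);    % variable name "trial" is an assumption
    data = s.trial;
end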
3 Comments
Joss Knight
11 Dec 2020
If the 22 time signals are different channels of each observation then your input data should be 1000-by-1-by-22 for a temporal convolutional network using 2-D convolution.
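In code, that reshape could live in the datastore's ReadFcn; a sketch, again assuming the variable name trial from above:

function data = readTrial(filename)
    s = load(filename);
    % Permute 22x1000 (channel-by-time) to 1000x1x22 (time-by-1-by-channel)
    % so each EEG channel maps to an image channel for 2-D convolution
    data = permute(s.trial, [2 3 1]);
end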
Accepted Answer
Joss Knight
11 Dec 2020
Use the DispatchInBackground training option to improve throughput when your data access and preprocessing are costly.
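A minimal sketch of enabling that option; the solver and other settings are placeholders rather than values from this thread, imds and layers stand for the datastore and layer array from the question, and DispatchInBackground requires Parallel Computing Toolbox:

opts = trainingOptions('sgdm', ...
    'MiniBatchSize', 400, ...
    'ExecutionEnvironment', 'gpu', ...
    'DispatchInBackground', true);  % prefetch and preprocess batches on background workers
net = trainNetwork(imds, layers, opts);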