NN memory usage calculation

Hi all,
Is there a way to calculate the memory an NN will use when training? My NN is net = feedforwardnet([60,30,1]) with 12 variables and 180,000 rows. I'm using 10 MEX workers and have 32 GB of memory. feedforwardnet([120,60,30,1]) works and fits into memory, but if I also add a 240-neuron layer the machine crashes. You may wonder why I'm doubling the size each time; after much experimentation, each doubling makes the network more accurate, but of course a lot slower.
Steve Gray

Accepted Answer

Shivansh, 3 Sep 2023
Moved: Walter Roberson, 3 Sep 2023

Hi Stephen,
Calculating the exact memory usage of a neural network during training is difficult because it depends on factors such as the specific architecture, the data size, and the framework used for training. However, you can estimate the requirements from the number of parameters and the size of the input data.
In your case, you have a feedforward neural network with a hidden layer configuration of [60, 30, 1]. To estimate the memory usage, consider the following:
- Input data size: with 12 variables and 180,000 rows, and assuming each value is a double-precision floating-point number (8 bytes), the input data requires input_data_memory = 12 * 180,000 * 8 bytes ≈ 17.3 MB.
- Model parameters: the memory for storing the weights and biases depends on the number of parameters. In a feedforward network, this is the number of connections between layers plus the biases. For your configuration: num_parameters = (12 * 60) + (60 * 30) + (30 * 1) + 60 + 30 + 1 = 2,641, i.e. only about 21 KB in double precision.
- Memory for training: training needs additional memory for intermediate results, gradients for backpropagation, and other variables. The exact amount depends on the optimization algorithm, the mini-batch size, and the specific implementation.
To estimate the total memory usage during training, you can sum up the memory requirements mentioned above and consider any additional memory overhead based on your specific training setup.
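As a rough sketch, the estimate above can be put into a few lines of MATLAB. The 8-byte figure assumes double precision, and the Jacobian term is an assumption that applies only if you keep feedforwardnet's default trainlm (Levenberg-Marquardt) training function, which builds a Jacobian of roughly num_parameters x num_samples doubles:

```matlab
% Rough memory estimate for net = feedforwardnet([60,30,1])
% with 12 inputs and 180,000 samples, assuming double precision (8 bytes).
nSamples = 180000;
nInputs  = 12;
inputBytes = nInputs * nSamples * 8;                  % ~17.3 MB

% Weights and biases for the 12 -> 60 -> 30 -> 1 configuration
numParams  = (12*60 + 60*30 + 30*1) + (60 + 30 + 1);  % 2641
paramBytes = numParams * 8;                           % tens of KB

% Assumption: with the default trainlm, the Jacobian of size
% numParams x nSamples doubles is usually the dominant term.
jacobianBytes = numParams * nSamples * 8;             % ~3.8 GB here

fprintf('input %.1f MB, params %d, Jacobian ~%.1f GB\n', ...
        inputBytes/1e6, numParams, jacobianBytes/1e9);
```

Scaling this calculation to your larger configurations suggests why the biggest one crashes: the Jacobian grows with the parameter count, so each doubling of the layer widths multiplies the dominant memory term several times over.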
Regarding the doubling of neurons: it may improve accuracy, but it adds a large memory overhead. Doubling the neuron count can also overfit the network to the training distribution and reduce accuracy on real-world test data. It is usually better to choose the architecture from an understanding of the model rather than by trial and error. You can also train on smaller batches of data or use parallel programming techniques.
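If memory stays tight, here is a hedged sketch of two common levers in Deep Learning Toolbox (property and option names should be verified against your release; X is assumed to be your 12-by-N input matrix and T the 1-by-N target):

```matlab
% Sketch of memory-saving options; verify names against your release.
net = feedforwardnet([60,30,1]);

% Option 1: replace the default trainlm (large Jacobian) with a
% gradient-only algorithm such as scaled conjugate gradient.
net.trainFcn = 'trainscg';

% Option 2 (alternative): keep trainlm but compute the Jacobian in
% chunks, trading speed for memory.
% net.efficiency.memoryReduction = 2;

% Train using the existing pool of parallel workers.
[net, tr] = train(net, X, T, 'useParallel', 'yes');
```

Option 1 changes the optimizer and may need more epochs to converge; option 2 keeps Levenberg-Marquardt but splits its dominant allocation into smaller pieces.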