Buying Decision: M2 Pro vs. M2 Max for Running Natively on Apple Silicon

4 views (last 30 days)
Dan on 24 Jul 2023
Answered: Shubham on 31 Jul 2023
I'm deciding between a 16" Pro (12-core CPU / 19-core GPU / 64 GB RAM) and a 16" Max (12-core CPU / 38-core GPU / 96 GB RAM); I have to take the 38-core GPU to get the 96 GB of RAM. I do run a bit of polyshape, but the bulk of my calculations cycle through 3D/4D matrices to build new large matrices. I do this with parallel computing (non-GPU) and hand the large matrices to the workers via parallel.pool.Constant to reduce overhead. My biggest matrix is close to 32 GB right now, but I could run larger simulations with more RAM. Do you think I'm getting a reduction in performance if my largest matrix size is close to my RAM size? Thank you in advance.
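For concreteness, here is a minimal sketch of the pattern described above (not the actual code from this thread; the array sizes and the reduction inside the loop are placeholders): one large matrix is wrapped in a parallel.pool.Constant so each worker holds a single copy instead of receiving the matrix as a broadcast variable on every parfor loop.

% Minimal sketch: share a large matrix with pool workers via
% parallel.pool.Constant (sizes are illustrative placeholders).
pool = gcp;                        % start or attach to a parallel pool
A = rand(2000, 2000, 50);          % stand-in for a large 3-D matrix
C = parallel.pool.Constant(A);     % copy A to each worker once

nSlices = size(A, 3);
out = zeros(1, nSlices);
parfor k = 1:nSlices
    % Workers read the shared copy through C.Value rather than
    % receiving A as a broadcast variable in the loop.
    out(k) = sum(C.Value(:, :, k), 'all');
end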

Answers (1)

Shubham on 31 Jul 2023
Hi Dan,
It seems like your primary concern is whether having a matrix size close to your RAM size would reduce performance. In general, yes: once your working set approaches or exceeds the available RAM, the operating system starts paging data out to disk (virtual memory), and disk access is far slower than reading from RAM, so computation time and overall system responsiveness degrade noticeably.
To avoid this, it is advisable to have enough RAM to comfortably hold your largest matrix together with the temporary copies that element-wise operations and the parallel pool create during computation. Since your largest matrix is already close to 32 GB, 64 GB would cover your current workload, and 96 GB would give you the headroom to run the larger simulations you mention while keeping everything in RAM and avoiding virtual-memory bottlenecks.
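As a rough way to budget this before scaling a simulation up (the array dimensions below are hypothetical, chosen only to land near the 32 GB figure mentioned above), you can estimate an array's footprint from its element count:

% Estimate the memory footprint of a large double-precision array
% (dimensions are hypothetical, for illustration only).
dims = [4000 4000 250];            % 4000-by-4000-by-250 double array
gbNeeded = prod(dims) * 8 / 2^30;  % 8 bytes per double element
fprintf('One such array needs about %.1f GB.\n', gbNeeded);
% Element-wise operations and data sent to parfor workers can create
% additional copies, so leave headroom well below the machine's physical
% RAM (64 GB or 96 GB in the configurations being compared).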
