parfor load balancing chunksize
I have a function that uses a parfor loop (~100 iterations) to evaluate another function. However, one of the workers is twice as fast as the other two (it uses a GPU that is twice as fast as the ones used by the other workers). At some point worker one (the fast one) stops working, while the other workers still have many iterations left (say 5-10 each). I suspect that worker one has run out of available chunks from parfor's load balancing, while the others are still busy with one of the larger chunks.
Is there a way to change the maximum chunk size to, for instance, 2 or 3, so that this problem of idle resources is avoided?
Accepted Answer
Edric Ellis
22 Mar 2018
parfor offers no means of controlling the chunk size. parfeval allows full control over how you split work up - perhaps you can use that instead (unfortunately, this will require a bit of restructuring of your code).
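As a minimal sketch of the parfeval approach: submitting each iteration as its own task means the scheduler hands out work one piece at a time, so a fast worker keeps pulling new tasks until none remain. Here `computeOne` is a hypothetical stand-in for your per-iteration function.

```matlab
% Submit each iteration individually; whichever worker is free takes the
% next task, so a 2x-faster worker simply completes ~2x as many tasks.
pool = gcp();                         % current (or new) parallel pool
N = 100;
futures(1:N) = parallel.FevalFuture;  % preallocate future array
for k = 1:N
    % computeOne is a placeholder for your function; 1 = number of outputs
    futures(k) = parfeval(pool, @computeOne, 1, k);
end
results = cell(1, N);
for k = 1:N
    % fetchNext returns as soon as ANY outstanding task finishes
    [idx, out] = fetchNext(futures);
    results{idx} = out;
end
```

Because results arrive in completion order, `fetchNext` returns the original index `idx` so you can store each output in the right slot.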
More Answers (1)
William Smith
26 Mar 2018
Edited: 26 Mar 2018
This issue of heterogeneous workloads (or, in your case, heterogeneous workers) and parfor's lack of proper support for them has come up a number of times over the years.
I solved this in my own domain because I know roughly in advance how long each piece of work will take to process. I then used the 'Longest Processing Time' scheduling algorithm, described at https://en.wikipedia.org/wiki/Multiprocessor_scheduling , to preallocate an array of length parpool().NumWorkers . Each element of the array holds multiple pieces of work, and it is this assignment that my scheduling algorithm optimises.
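The LPT idea above can be sketched as follows, assuming you have a rough per-task cost estimate (here a placeholder `estCost` vector): sort tasks by descending cost and repeatedly give the next task to the worker with the least total estimated work.

```matlab
% Longest Processing Time (LPT) scheduling sketch.
% estCost is a hypothetical vector of per-task runtime estimates.
p = gcp();                          % reuse the open pool (or start one)
nWorkers = p.NumWorkers;
estCost = rand(1, 100);             % placeholder cost estimates
[~, order] = sort(estCost, 'descend');
bins = cell(1, nWorkers);           % task indices assigned to each worker
load = zeros(1, nWorkers);          % running total of estimated work
for t = order
    [~, w] = min(load);             % pick the least-loaded worker
    bins{w}(end+1) = t;
    load(w) = load(w) + estCost(t);
end
% Each bins{w} now holds several tasks; run one task per worker
% (e.g. via parfeval) that processes its whole list.
```

LPT is a greedy heuristic, not optimal, but it is guaranteed to be within 4/3 of the optimal makespan and works well when cost estimates are reasonable.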