MATLAB Answers


Mapreduce does not seem to use all available cores

Asked by Mehrdad Oveisi on 10 Nov 2014
Latest activity: answered by Rick Amos on 24 Nov 2014
I am using mapreduce on a machine with 16 cores. I make a pool with 15 workers (cores) which works fine. When I run mapreduce though, it only utilizes one or two workers: sometimes one for the mapper and one for the reducer. This is how I check which worker is processing the data (in addition to using a system monitor to watch CPU/core activities):
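The checking code referred to above did not survive in this copy of the page. A minimal sketch of such a check, assuming the Parallel Computing Toolbox is available (the mapper name and key are illustrative only):

```matlab
% Hypothetical mapper that reports which pool worker executes each call.
% getCurrentTask (Parallel Computing Toolbox) returns the current task
% object on a worker, and [] when run on the client.
function reportingMapper(data, info, intermKVStore)
    t = getCurrentTask();
    if isempty(t)
        fprintf('Mapper running on the client\n');
    else
        fprintf('Mapper running on worker %d\n', t.ID);
    end
    % Emit a trivial key-value pair so the reducer has something to do.
    add(intermKVStore, 'rowCount', size(data, 1));
end
```

Watching these printouts (or a system monitor) across many mapper calls shows whether more than one worker ID ever appears.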
There are tens of files to be processed, and each mapper call loads and processes one file. I expect that while the first call is loading and processing the first file on one worker (core), other parallel mapper calls process the next files on other workers. However, this is not what happens: mapreduce calls the mapper sequentially on the same worker. Sometimes it uses a second worker for the reducer calls. So at most it uses two workers, while 15 are available in the pool.
What would be a simple code to check if mapreduce is making use of all the available cores?
EDIT: Actually now I can confirm that the mapper is always run by a single worker, but the reducer may be run by a few different workers, as expected.
Your help is appreciated, Mehrdad

  10 comments

Mehrdad Oveisi on 14 Nov 2014
Thank you for the workaround, I will try it.
I would also appreciate it if you could let me know how the .mat files should be modified to be suitable for datastores.
  • Should the .mat file contain only a single table and no other variables,
  • should the table not be inside a struct,
  • or something else?
Rick Amos on 17 Nov 2014
Currently, the only form of .mat file that datastore can read is the output of another mapreduce call. An unofficial shortcut that creates such a .mat file is the following code:
data.Key = {'Test'};                                  % cell array of keys
data.Value = {struct('a', 'Hello World!', 'b', 42)};  % matching cell array of values
save('myMatFile.mat', '-struct', 'data');             % save Key/Value as top-level variables
ds = datastore('myMatFile.mat');                      % recognized as a key-value datastore
Mehrdad Oveisi on 19 Nov 2014
Thank you, Rick! I found your reply here useful, so I thought it would be good to give this tip its own thread.


1 Answer

Answer by Rick Amos on 24 Nov 2014

In R2014b, mapreduce has a limitation on the minimum amount of data it will parallelize. To avoid this limitation, the input datastore must contain at least one of the following:
  1. Multiple files, where each file will be handled in parallel.
  2. Files that are larger than 32 MB, where each 32 MB will be handled in parallel.
If the input datastore contains a single small file, you will need to find a way to split that file into multiple files. For example, if the input datastore contains a single file listing many filenames (pointing to the actual data), you can split it into many files, each containing one or a small number of filenames, to ensure parallelism.
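The splitting step described above could be sketched as follows. This is a hedged example, not code from the answer; the file name 'fileList.txt' and the output folder 'splitList' are hypothetical:

```matlab
% Split a single text file listing many filenames into one file per line,
% so that datastore sees multiple files and mapreduce can run mapper
% calls on them in parallel.
lines = strsplit(fileread('fileList.txt'), '\n');
lines = lines(~cellfun('isempty', lines));   % drop blank lines
mkdir('splitList');
for k = 1:numel(lines)
    fid = fopen(fullfile('splitList', sprintf('part%03d.txt', k)), 'w');
    fprintf(fid, '%s\n', lines{k});
    fclose(fid);
end
% A datastore over the folder now reports one file per original line.
ds = datastore(fullfile('splitList', '*.txt'), ...
    'Type', 'tabulartext', 'ReadVariableNames', false);
```

With tens of such files, each mapper invocation receives one file, matching condition 1 above.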

  0 comments

