MATLAB Answers


Run a MATLAB file on each node of a Hadoop cluster

Asked by Truong Nang Toan on 16 Oct 2019 at 9:02
Latest activity: commented on by Truong Nang Toan on 17 Oct 2019 at 1:48

Hi everybody,
I want to build a cluster using Hadoop, but I don't know whether I can run a MATLAB file on each DataNode. Should I install MATLAB on each node, or is there another way to solve this problem?
Please help me. Thanks a lot.

  0 comments


1 Answer

Answer by Steven Lord on 16 Oct 2019 at 13:45
 Accepted Answer

This documentation page describes how to configure a Hadoop cluster so that client MATLAB sessions can submit jobs to it. The first step in that workflow, "Cluster Configuration", links to a page that describes how to install MATLAB on the worker nodes.
I don't think this process has changed in recent releases, but if you're using a release older than the current one (the online documentation is for the current release, which right now is R2019b), you probably want to find the equivalent of that page in the documentation installed with one of the clients that will submit to the Hadoop cluster. Alternatively, find the correct release's documentation in the documentation archive.
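For reference, the client-side setup described on that page looks roughly like this. This is a minimal sketch, assuming Parallel Computing Toolbox on the client and MATLAB installed on every worker node; both install paths are placeholders for your own locations.

% Create a cluster object pointing at the Hadoop installation on the
% cluster; the path is a placeholder for your actual install folder.
cluster = parallel.cluster.Hadoop('HadoopInstallFolder', '/usr/local/hadoop');

% Tell the cluster object where MATLAB is installed on the worker nodes
% (again a placeholder path).
cluster.ClusterMatlabRoot = '/usr/local/MATLAB/R2019b';

% Route subsequent mapreduce calls through the Hadoop cluster instead of
% running them in the local MATLAB session.
mr = mapreducer(cluster);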

  3 comments

Truong Nang Toan, 16 Oct 2019 at 14:54
Thank you very much, Steven Lord.
I also have a final question.
For example, suppose I have a Hadoop cluster consisting of 1 NameNode and 3 DataNodes. After splitting and distributing the dataset file across the 3 DataNodes, each DataNode will use MATLAB to process its portion of the data, and finally a MapReduce step creates the output file on the NameNode.
How can I do that, and are the above steps logical?
Thanks.
Jason Ross
16 Oct 2019 at 15:42
Hadoop handles the splitting and distribution of your data when you put the data into HDFS using something like "put" or "copyFromLocal". You can specify your datasets using the datastore command. The mapreduce step runs as a job on the YARN scheduler, which starts the MATLAB installation you specify on the cluster object. There are a variety of ways to get your data back; for example, you can write it to HDFS.
We have a number of examples and documentation on how to use mapreduce here. There is a worked example at the bottom of that page that shows the mapreduce workflow.
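To make that concrete, here is a rough sketch of the whole workflow, assuming the dataset has already been copied into HDFS (for example with "hadoop fs -put") and that a Hadoop cluster object named cluster has been created as in the answer above. The file name, HDFS paths, and the mapper/reducer are illustrative, following the mean-arrival-delay pattern from the mapreduce documentation.

% Point a datastore at the file in HDFS; Hadoop has already split the
% file into blocks across the DataNodes.
ds = datastore('hdfs:///datasets/airlinesmall.csv', ...
    'TreatAsMissing', 'NA', 'SelectedVariableNames', 'ArrDelay');

% Run mapreduce as a job on the YARN scheduler, writing the output back
% to HDFS rather than to the client session.
outds = mapreduce(ds, @meanDelayMapper, @meanDelayReducer, ...
    mapreducer(cluster), 'OutputFolder', 'hdfs:///results');

% Pull the final key-value result back into the client session.
result = readall(outds);

% --- Illustrative mapper and reducer (each would live in its own file) ---
function meanDelayMapper(data, ~, intermKVStore)
    % Emit the partial sum and count of arrival delays for this chunk.
    d = data.ArrDelay(~isnan(data.ArrDelay));
    add(intermKVStore, 'delay', [sum(d) numel(d)]);
end

function meanDelayReducer(~, intermValIter, outKVStore)
    % Combine the partial sums and counts into one overall mean delay.
    total = [0 0];
    while hasnext(intermValIter)
        total = total + getnext(intermValIter);
    end
    add(outKVStore, 'MeanArrivalDelay', total(1) / total(2));
end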
Truong Nang Toan, 17 Oct 2019 at 1:48
Thank you very much, Jason Ross.



