Communicating MDCS jobs with SLURM do not finish correctly

3 views (last 30 days)
Stefan Harfst on 9 Mar 2017
Commented: Stefan Harfst on 9 Sep 2022
We use MDCS with SLURM on a local HPC cluster, and in principle the integration of MDCS with SLURM has worked, following the instructions found here. We had to make a fix in the file communicatingJobWrapper.sh as described here.
However, communicating jobs now sometimes do not finish correctly, and I was able to track the problem down to that change. Basically, the wrapper script hangs when trying to stop the SMPD:
$ tail Job32.log
[1]2017-03-09 10:56:19 | About to exit with code: 0
[3]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[0]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[3]2017-03-09 10:56:19 | About to exit MATLAB normally
[0]2017-03-09 10:56:19 | About to exit MATLAB normally
[3]2017-03-09 10:56:19 | About to exit with code: 0
[0]2017-03-09 10:56:19 | About to exit with code: 0
Stopping SMPD ...
srun --ntasks-per-node=1 --ntasks=3 /cm/shared/uniol/software/MATLAB/2016b/bin/mw_smpd -shutdown -phrase MATLAB -port 27223
srun: Job step creation temporarily disabled, retrying
This happens whenever a node has been allocated only a single CPU/core (we use select/cons_res with CR_CPU_MEMORY). In that case the srun running in the background prevents the srun for the SMPD shutdown from allocating resources.
I can think of only one way to resolve this problem: using OverSubscribe (which we currently have turned off). Is there another way? The JobWrapper script we use is attached.
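For reference, the conflict can be reproduced outside of MDCS. A minimal sketch, where sleep and hostname are hypothetical stand-ins for the background SMPD step and the shutdown step: inside an allocation with a single CPU, a background job step occupies that CPU, so a second step cannot be created until the first one ends.
$ salloc --nodes=1 --ntasks=1 bash -c '
  srun --ntasks=1 sleep 60 &    # background step holds the only allocated CPU
  srun --ntasks=1 hostname      # second step stalls: "Job step creation temporarily disabled, retrying"
  wait
'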

Accepted Answer

Stefan Harfst on 10 Mar 2017
Found a solution:
Add the options --overcommit and --gres=none (in case the use of GRESes is configured in communicatingSubmitFcn.m) to the two srun commands in the communicatingJobWrapper.sh script. E.g., for shutdown:
srun --overcommit --gres=none --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -shutdown -phrase MATLAB -port ${SMPD_PORT}
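The other srun in communicatingJobWrapper.sh, the one that starts the SMPD daemons in the background, gets the same two options. A sketch, assuming the wrapper resembles the MathWorks example script (the exact startup line may differ between MATLAB releases):
# start SMPD on every node of the allocation, in the background
srun --overcommit --gres=none --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -phrase MATLAB -port ${SMPD_PORT} &
With --overcommit the shutdown step is allowed to share the CPU already held by the background step instead of waiting for a free slot, and --gres=none keeps these housekeeping steps from requesting generic resources they do not need.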
2 Comments
Brian on 25 Aug 2022
This thread is 5 years old, but I am experiencing the same issue now that my organization's new HPC cluster uses SLURM (vs. SGE). I am running 2017b and am unable to validate my cluster profile, and the above edits to the srun commands do not resolve the behavior.
Matlab is not receiving a 'finished' signal even though the job goes to CG and then falls off the queue.
Thanks for any further assistance.
Stefan Harfst on 9 Sep 2022
If the jobs are completing on the cluster but Matlab is not receiving the finished state, then I think you are facing a different problem. The problem we had was that some Matlab jobs never terminated because the srun command to shut down the SMPD got stuck.


More Answers (0)
