
Multithreading on SLURM

I have a Perl script that forks using the Parallel::ForkManager module.

To my knowledge, if I fork 32 child processes and ask the SLURM scheduler for 4 nodes with 8 processors per node, each child process should run on its own core.

Someone in my lab said that if I run a job on multiple nodes, the other nodes are not used, and I'm wasting time and money. Is this accurate?

If I use a script that forks am I limited to one node with SLURM?
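For context, the allocation described above would typically be requested with a batch script along these lines (the job name and Perl script name are placeholders, not from the original question):

```shell
#!/bin/bash
#SBATCH --job-name=perl-fork      # placeholder job name
#SBATCH --nodes=4                 # 4 nodes
#SBATCH --ntasks-per-node=8       # 8 tasks per node
#SBATCH --cpus-per-task=1

# Note: a plain invocation like this launches exactly one perl
# process, and it starts on the first allocated node only.
perl my_forker.pl
```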

asked Jul 08 '15 by DolphinGenomePyramids

1 Answer

As far as I know, Parallel::ForkManager doesn't use MPI, so even if you launch the script with mpirun I don't see how it would communicate across nodes. A simple test is to have each child print the machine's hostname.
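A minimal sketch of that hostname check using Parallel::ForkManager (assuming the module is installed from CPAN; the fork counts mirror the question and are otherwise illustrative):

```perl
use strict;
use warnings;
use Sys::Hostname;
use Parallel::ForkManager;

# Allow up to 8 children to run concurrently
my $pm = Parallel::ForkManager->new(8);

for my $i (1 .. 32) {
    $pm->start and next;    # parent: spawn next child
    printf "child %2d running on %s\n", $i, hostname();
    $pm->finish;            # child exits
}
$pm->wait_all_children;
```

If all 32 lines report the same hostname, the forked children never left the first node of the allocation.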

One thing that commonly happens with non-MPI software launched under mpirun is that the work is duplicated on every node: each node runs the exact same 32 forks instead of sharing them, so you do all the effort N times over. If you use Parallel::MPI instead, it should distribute across nodes just fine.

answered Sep 29 '22 by Mark