Installing/emulating SLURM on an Ubuntu 16.04 desktop: slurmd fails to start

Edit

What I am really looking for is a way to emulate SLURM, something interactive and reasonably user-friendly that I can install.


Original post

I want to test drive some minimal examples with SLURM, and I am trying to install it all on a local machine with Ubuntu 16.04. I am following the most recent slurm install guide I could find, and I got as far as "start slurmd with sudo /etc/init.d/slurmd start".

[....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!

I do not know how to interpret the systemctl log:

● slurmd.service - Slurm node daemon
   Loaded: loaded (/lib/systemd/system/slurmd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2017-10-26 22:49:27 EDT; 12s ago
  Process: 5951 ExecStart=/usr/sbin/slurmd $SLURMD_OPTIONS (code=exited, status=1/FAILURE)

Oct 26 22:49:27 Haggunenon systemd[1]: Starting Slurm node daemon...
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Control process exited, code=exited status=1
Oct 26 22:49:27 Haggunenon systemd[1]: Failed to start Slurm node daemon.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Unit entered failed state.
Oct 26 22:49:27 Haggunenon systemd[1]: slurmd.service: Failed with result 'exit-code'.
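For reference, this status output rarely shows the root cause. The journalctl command mentioned in the error message, or running slurmd in the foreground with verbose logging (the same command the answer below suggests), gives more detail:

# Recent log entries for the unit
journalctl -xe -u slurmd
# Run slurmd in the foreground; it prints the actual fatal error and exits
sudo slurmd -Dvvv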

lsb_release -a gives the following. (Yes, I know, KDE Neon is not exactly Ubuntu, strictly speaking.)

No LSB modules are available.
Distributor ID: neon
Description:    KDE neon User Edition 5.11
Release:        16.04
Codename:       xenial

Unlike what the guide said, I used my own user name, wlandau, and I made sure to chown /var/lib/slurm-llnl and /var/run/slurm-llnl to myself. Here is my /etc/slurm-llnl/slurm.conf:

# slurm.conf file generated by configurator.html.
# Put this file on all nodes of your cluster.
# See the slurm.conf man page for more information.
#
ControlMachine=linux0
#ControlAddr=
#BackupController=
#BackupAddr=
#
AuthType=auth/munge
CacheGroups=0
#CheckpointType=checkpoint/none
CryptoType=crypto/munge
#DisableRootJobs=NO
#EnforcePartLimits=NO
#Epilog=
#EpilogSlurmctld=
#FirstJobId=1
#MaxJobId=999999
#GresTypes=
#GroupUpdateForce=0
#GroupUpdateTime=600
#JobCheckpointDir=/var/lib/slurm-llnl/checkpoint
#JobCredentialPrivateKey=
#JobCredentialPublicCertificate=
#JobFileAppend=0
#JobRequeue=1
#JobSubmitPlugins=1
#KillOnBadExit=0
#LaunchType=launch/slurm
#Licenses=foo*4,bar
#MailProg=/usr/bin/mail
#MaxJobCount=5000
#MaxStepCount=40000
#MaxTasksPerNode=128
MpiDefault=none
#MpiParams=ports=#-#
#PluginDir=
#PlugStackConfig=
#PrivateData=jobs
ProctrackType=proctrack/pgid
#Prolog=
#PrologFlags=
#PrologSlurmctld=
#PropagatePrioProcess=0
#PropagateResourceLimits=
#PropagateResourceLimitsExcept=
#RebootProgram=
ReturnToService=1
#SallocDefaultCommand=
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=wlandau
#SlurmdUser=root
#SrunEpilog=
#SrunProlog=
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
#TaskEpilog=
TaskPlugin=task/none
#TaskPluginParam=
#TaskProlog=
#TopologyPlugin=topology/tree
#TmpFS=/tmp
#TrackWCKey=no
#TreeWidth=
#UnkillableStepProgram=
#UsePAM=0
#
#
# TIMERS
#BatchStartTimeout=10
#CompleteWait=0
#EpilogMsgTime=2000
#GetEnvTimeout=2
#HealthCheckInterval=0
#HealthCheckProgram=
InactiveLimit=0
KillWait=30
#MessageTimeout=10
#ResvOverRun=0
MinJobAge=300
#OverTimeLimit=0
SlurmctldTimeout=120
SlurmdTimeout=300
#UnkillableStepTimeout=60
#VSizeFactor=0
Waittime=0
#
#
# SCHEDULING
#DefMemPerCPU=0
FastSchedule=1
#MaxMemPerCPU=0
#SchedulerRootFilter=1
#SchedulerTimeSlice=30
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
#SelectTypeParameters=
#
#
# JOB PRIORITY
#PriorityFlags=
#PriorityType=priority/basic
#PriorityDecayHalfLife=
#PriorityCalcPeriod=
#PriorityFavorSmall=
#PriorityMaxAge=
#PriorityUsageResetPeriod=
#PriorityWeightAge=
#PriorityWeightFairshare=
#PriorityWeightJobSize=
#PriorityWeightPartition=
#PriorityWeightQOS=
#
#
# LOGGING AND ACCOUNTING
#AccountingStorageEnforce=0
#AccountingStorageHost=
#AccountingStorageLoc=
#AccountingStoragePass=
#AccountingStoragePort=
AccountingStorageType=accounting_storage/none
#AccountingStorageUser=
AccountingStoreJobComment=YES
ClusterName=cluster
#DebugFlags=
#JobCompHost=
#JobCompLoc=
#JobCompPass=
#JobCompPort=
JobCompType=jobcomp/none
#JobCompUser=
#JobContainerPlugin=job_container/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
#SlurmSchedLogFile=
#SlurmSchedLogLevel=
#
#
# POWER SAVE SUPPORT FOR IDLE NODES (optional)
#SuspendProgram=
#ResumeProgram=
#SuspendTimeout=
#ResumeTimeout=
#ResumeRate=
#SuspendExcNodes=
#SuspendExcParts=
#SuspendRate=
#SuspendTime=
#
#
# COMPUTE NODES
NodeName=linux[1-32] CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=linux[1-32] Default=YES MaxTime=INFINITE State=UP
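As an aside, slurmd can print the values it detects for this machine in slurm.conf syntax, which is a handy cross-check for the COMPUTE NODES section above (this assumes the packaged slurmd supports the -C flag):

# Print the node's detected NodeName, CPUs, memory, etc. in slurm.conf format
slurmd -C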

Follow-up

After rewriting my slurm.conf with the help of @damienfrancois, slurmd starts now. But unfortunately, sinfo hangs when I call it, and I get the same error message as before.

$ sudo /etc/init.d/slurmctld stop
[ ok ] Stopping slurmctld (via systemctl): slurmctld.service.
$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
$ sinfo
slurm_load_partitions: Unable to contact slurm controller (connect failure)
$ slurmd -Dvvv
slurmd: fatal: Frontend not configured correctly in slurm.conf.  See man slurm.conf look for frontendname.

Then I tried restarting the daemons, and slurmd failed to start all over again.

$ sudo /etc/init.d/slurmd start
[....] Starting slurmd (via systemctl): slurmd.serviceJob for slurmd.service failed because the control process exited with error code. See "systemctl status slurmd.service" and "journalctl -xe" for details.
 failed!
asked Oct 27 '17 by landau


1 Answer

The value of ControlMachine has to match the output of hostname -s on the machine where slurmctld starts. The same holds for NodeName: it has to match the output of hostname -s on the machine where slurmd starts. As you only have one machine, and it appears to be called Haggunenon, the relevant lines in slurm.conf should be:

ControlMachine=Haggunenon
[...]
NodeName=Haggunenon CPUs=1 State=UNKNOWN
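A quick way to double-check the exact name to use (both daemons run on the same machine here, so a single command suffices):

# The short hostname must match ControlMachine and NodeName exactly
hostname -s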

If you want to start several slurmd daemons to emulate a larger cluster, you will need to start each slurmd with the -N option (but that requires that Slurm be built with the --enable-multiple-slurmd configure option).
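Here is a rough sketch of what that could look like, assuming a source build; the node names, ports and %n-based paths below are illustrative, not taken from your configuration:

# Build Slurm with support for several slurmd daemons on one host
./configure --enable-multiple-slurmd
make && sudo make install

# In slurm.conf, give each emulated node its own name, hostname and port,
# and make the per-daemon paths unique with the %n (node name) substitution:
#   SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd.%n
#   SlurmdPidFile=/var/run/slurm-llnl/slurmd.%n.pid
#   NodeName=node1 NodeHostname=localhost Port=17001 CPUs=1 State=UNKNOWN
#   NodeName=node2 NodeHostname=localhost Port=17002 CPUs=1 State=UNKNOWN

# Start one slurmd per emulated node
sudo slurmd -N node1
sudo slurmd -N node2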


UPDATE. Here is a walkthrough. I set up a VM with Vagrant and VirtualBox (vagrant init ubuntu/xenial64 ; vagrant up), and then, after vagrant ssh, I ran the following:

ubuntu@ubuntu-xenial:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.3 LTS
Release:    16.04
Codename:   xenial
ubuntu@ubuntu-xenial:~$ sudo apt-get update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
[...]
Get:35 http://archive.ubuntu.com/ubuntu xenial-backports/universe Translation-en [3,060 B]
Fetched 23.6 MB in 4s (4,783 kB/s)
Reading package lists... Done
ubuntu@ubuntu-xenial:~$ sudo apt-get install munge libmunge2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  libmunge2 munge
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 102 kB of archives.
After this operation, 351 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 libmunge2 amd64 0.5.11-3ubuntu0.1 [18.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 munge amd64 0.5.11-3ubuntu0.1 [83.9 kB]
Fetched 102 kB in 0s (290 kB/s)
Selecting previously unselected package libmunge2.
(Reading database ... 57914 files and directories currently installed.)
Preparing to unpack .../libmunge2_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking libmunge2 (0.5.11-3ubuntu0.1) ...
Selecting previously unselected package munge.
Preparing to unpack .../munge_0.5.11-3ubuntu0.1_amd64.deb ...
Unpacking munge (0.5.11-3ubuntu0.1) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for man-db (2.7.5-1) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Setting up libmunge2 (0.5.11-3ubuntu0.1) ...
Setting up munge (0.5.11-3ubuntu0.1) ...
Generating a pseudo-random key using /dev/urandom completed.
Please refer to /usr/share/doc/munge/README.Debian for instructions to generate more secure key.
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo apt-get install slurm-wlm slurm-wlm-basic-plugins
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  fontconfig fontconfig-config fonts-dejavu-core freeipmi-common libcairo2 libdatrie1 libdbi1 libfontconfig1 libfreeipmi16 libgraphite2-3 
[...]
  python-minimal python2.7 python2.7-minimal slurm-client slurm-wlm slurm-wlm-basic-plugins slurmctld slurmd
0 upgraded, 43 newly installed, 0 to remove and 0 not upgraded.
Need to get 20.8 MB of archives.
After this operation, 87.3 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 fonts-dejavu-core all 2.35-1 [1,039 kB]
[...]
Get:43 http://archive.ubuntu.com/ubuntu xenial/universe amd64 slurm-wlm amd64 15.08.7-1build1 [6,482 B]
Fetched 20.8 MB in 3s (5,274 kB/s)
Extracting templates from packages: 100%
Selecting previously unselected package fonts-dejavu-core.
(Reading database ... 57952 files and directories currently installed.)
[...]
Processing triggers for libc-bin (2.23-0ubuntu9) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
ubuntu@ubuntu-xenial:~$ sudo vim /etc/slurm-llnl/slurm.conf
ubuntu@ubuntu-xenial:~$ grep -v \# /etc/slurm-llnl/slurm.conf
ControlMachine=ubuntu-xenial
AuthType=auth/munge
CacheGroups=0
CryptoType=crypto/munge
MpiDefault=none
ProctrackType=proctrack/pgid
ReturnToService=1
SlurmctldPidFile=/var/run/slurm-llnl/slurmctld.pid
SlurmctldPort=6817
SlurmdPidFile=/var/run/slurm-llnl/slurmd.pid
SlurmdPort=6818
SlurmdSpoolDir=/var/lib/slurm-llnl/slurmd
SlurmUser=ubuntu
StateSaveLocation=/var/lib/slurm-llnl/slurmctld
SwitchType=switch/none
TaskPlugin=task/none
InactiveLimit=0
KillWait=30
MinJobAge=300
SlurmctldTimeout=120
SlurmdTimeout=300
Waittime=0
FastSchedule=1
SchedulerType=sched/backfill
SchedulerPort=7321
SelectType=select/linear
AccountingStorageType=accounting_storage/none
AccountingStoreJobComment=YES
ClusterName=cluster
JobCompType=jobcomp/none
JobAcctGatherFrequency=30
JobAcctGatherType=jobacct_gather/none
SlurmctldDebug=3
SlurmctldLogFile=/var/log/slurm-llnl/slurmctld.log
SlurmdDebug=3
SlurmdLogFile=/var/log/slurm-llnl/slurmd.log
NodeName=ubuntu-xenial CPUs=1 State=UNKNOWN
PartitionName=debug Nodes=ubuntu-xenial Default=YES MaxTime=INFINITE State=UP
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/log/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/lib/slurm-llnl/slurmctld
ubuntu@ubuntu-xenial:~$ sudo chown ubuntu /var/run/slurm-llnl
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmctld start
[ ok ] Starting slurmctld (via systemctl): slurmctld.service.
ubuntu@ubuntu-xenial:~$ sudo /etc/init.d/slurmd start
[ ok ] Starting slurmd (via systemctl): slurmd.service.

And in the end, it gives me the expected result:

ubuntu@ubuntu-xenial:~$ sinfo
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      1   idle ubuntu-xenial
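Beyond sinfo, a quick way to confirm that jobs actually run on this single-node setup (not part of the walkthrough above; hello.sh is a hypothetical script name):

# Run a command through the scheduler; it should print the VM's hostname
srun hostname

# A minimal batch script saved as hello.sh:
#   #!/bin/bash
#   #SBATCH --job-name=hello
#   #SBATCH --output=hello.out
#   hostname

# Submit it and watch the queue
sbatch hello.sh
squeue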

If following the exact steps here does not help, try running:

sudo slurmctld -Dvvv
sudo slurmd -Dvvv

The messages should be explicit enough to pinpoint the problem.
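The daemon log files configured in slurm.conf are another place to look, for example:

# Paths taken from SlurmctldLogFile / SlurmdLogFile in the configuration above
sudo tail -f /var/log/slurm-llnl/slurmctld.log /var/log/slurm-llnl/slurmd.log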

answered Oct 19 '22 by damienfrancois