
How to get original location of script used for SLURM job?

I'm starting a SLURM job with a script, and the script must act depending on its own location, which it obtains inside the script itself with SCRIPT_LOCATION=$(realpath $0). But SLURM copies the script into the slurmd spool folder and starts the job from there, which breaks the further steps.

Is there any option to get the location of the script used for the SLURM job before it was moved/copied?

The script is located in a network shared folder /storage/software_folder/software_name/scripts/this_script.sh and it must:

  1. get its own location
  2. determine the software_name folder from it
  3. copy the software_name folder to a local folder /node_folder on the node
  4. run another script from the copied folder: /node_folder/software_name/scripts/launch.sh

My script is

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=my_partition_name

# getting location of software_name 
SHARED_PATH=$(dirname $(dirname $(realpath $0)))
# separating the software_name from path
SOFTWARE_NAME=$(basename $SHARED_PATH)
# target location to copy project
LOCAL_SOFTWARE_FOLDER='/node_folder'
# corrected path for target
LOCAL_PATH=$LOCAL_SOFTWARE_FOLDER/$SOFTWARE_NAME

# Copying software folder from network storage to local
cp -r $SHARED_PATH $LOCAL_SOFTWARE_FOLDER
# running the script
sh $LOCAL_PATH/scripts/launch.sh

It runs perfectly when I run it on the node itself (without SLURM) via sh /storage/software/scripts/this_script.sh.

When I run it with SLURM as sbatch /storage/software/scripts/this_script.sh, it is assigned to one of the nodes, but:

  • before running, the script is copied to /var/spool/slurmd/job_number/slurm_script, which breaks everything because $(dirname $(dirname $(realpath $0))) then returns /var/spool/slurmd

Is it possible to get the original location (/storage/software_folder/software_name/) inside the script when it is started with SLURM?

P.S. All machines are running Fedora 30 (x64)

UPDATE 1

There was a suggestion to run it as sbatch -D /storage/software_folder/software_name ./scripts/this_script.sh and use SHARED_PATH="${SLURM_SUBMIT_DIR}" inside the script itself. But that raises the error sbatch: error: Unable to open file ./scripts/this_script.sh.

I also tried absolute paths: sbatch -D /storage/software_folder/software_name /storage/software_folder/software_name/scripts/this_script.sh. It tries to run, but:

  • in that case the specified folder is used only for creating the output file
  • the software still doesn't run
  • echo "${SLURM_SUBMIT_DIR}" inside the script prints /home/username_who_started_script instead of /storage/software_folder/software_name

Any other suggestions?

UPDATE 2: I also tried #SBATCH --chdir=/storage/software_folder/software_name inside the script, but in that case echo "${SLURM_SUBMIT_DIR}" returns /home/username_who_started_script or / (if run as root).

UPDATE 3

The ${SLURM_SUBMIT_DIR} approach worked only if the job is submitted as:

cd /storage/software_folder/software_name
sbatch ./scripts/this_script.sh

But it doesn't seem to be a proper solution. Are there any other ways?

SOLUTION

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --partition=my_partition_name

# check whether the script was started via SLURM or directly with bash:
# if started via SLURM, the variable $SLURM_JOB_ID is set
# `if [ -n "$SLURM_JOB_ID" ]` checks that $SLURM_JOB_ID is a non-empty string
if [ -n "$SLURM_JOB_ID" ]; then
    # recover the original script location through scontrol and $SLURM_JOB_ID
    SCRIPT_PATH=$(scontrol show job "$SLURM_JOB_ID" | awk -F= '/Command=/{print $2}')
else
    # otherwise: started with bash. Get the real location.
    SCRIPT_PATH=$(realpath "$0")
fi

# getting location of software_name 
SHARED_PATH=$(dirname "$(dirname "$SCRIPT_PATH")")
# separating the software_name from path
SOFTWARE_NAME=$(basename $SHARED_PATH)
# target location to copy project
LOCAL_SOFTWARE_FOLDER='/node_folder'
# corrected path for target
LOCAL_PATH=$LOCAL_SOFTWARE_FOLDER/$SOFTWARE_NAME

# Copying software folder from network storage to local
cp -r $SHARED_PATH $LOCAL_SOFTWARE_FOLDER
# running the script
sh $LOCAL_PATH/scripts/launch.sh
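
With this check in place, the same script can be launched either way (full paths as given in the question):

# submitted through SLURM
sbatch /storage/software_folder/software_name/scripts/this_script.sh

# or run directly on the node, without SLURM
sh /storage/software_folder/software_name/scripts/this_script.sh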
Araneus0390 asked Jul 10 '19



2 Answers

You can get the initial (i.e. at submit time) location of the submission script from scontrol like this:

scontrol show job $SLURM_JOBID | awk -F= '/Command=/{print $2}'
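
On a typical Slurm installation, the output of scontrol show job contains a line that looks roughly like the following (the path is the one from the question; the exact set of fields depends on the Slurm version):

Command=/storage/software_folder/software_name/scripts/this_script.sh

awk -F= splits that line on = and prints the second field, i.e. the path of the script as it was submitted.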

So you can replace the realpath $0 part with the above. This only works within a Slurm allocation, of course, so if you want the script to work in any situation, you will need some logic like:

if [ -n "$SLURM_JOB_ID" ] ; then
    THEPATH=$(scontrol show job "$SLURM_JOB_ID" | awk -F= '/Command=/{print $2}')
else
    THEPATH=$(realpath "$0")
fi

and then proceed with

SHARED_PATH=$(dirname $(dirname "${THEPATH}"))
damienfrancois answered Oct 19 '22


I had to do the same in an array job. The accepted answer from @damienfrancois works well for all jobs except the one whose job id equals the ArrayJobId: there scontrol reports one record per array task, so the Command= line appears more than once. Piping the awk output to head does the trick:

scontrol show job $SLURM_JOBID | awk -F= '/Command=/{print $2}' | head -n 1
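
A minimal sketch combining this with the accepted answer's check (same variables as above):

if [ -n "$SLURM_JOB_ID" ] ; then
    THEPATH=$(scontrol show job "$SLURM_JOB_ID" | awk -F= '/Command=/{print $2}' | head -n 1)
else
    THEPATH=$(realpath "$0")
fi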
Karthik Govindappa answered Oct 19 '22