 

Why can't I use job control in a bash script?

In this answer to another question, I was told that

in scripts you don't have job control (and trying to turn it on is stupid)

This is the first time I've heard this, and I've pored over the bash.info section on Job Control (chapter 7), finding no mention of either of these assertions. [Update: The man page is a little better, mentioning 'typical' use, default settings, and terminal I/O, but no real reason why job control is particularly ill-advised for scripts.]

So why doesn't script-based job-control work, and what makes it a bad practice (aka 'stupid')?

Edit: The script in question starts a background process, starts a second background process, then attempts to put the first process back into the foreground so that it has normal terminal I/O (as if run directly), which can then be redirected from outside the script. Can't do that to a background process.

As noted by the accepted answer to the other question, there exist other scripts that solve that particular problem without attempting job control. Fine. And the lambasted script uses a hard-coded job number — Obviously bad. But I'm trying to understand whether job control is a fundamentally doomed approach. It still seems like maybe it could work...
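Roughly, the pattern I'm asking about looks like this (simplified; the command names are placeholders, not the actual script):

    #!/bin/bash
    first_proc &     # placeholder for the first background process
    second_proc &    # placeholder for the second background process
    fg %1            # try to bring the first one back to the foreground;
                     # run as a script, this fails with "fg: no job control"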

asked Mar 27 '09 by system PAUSE


People also ask

What is job control in bash?

Job control refers to the ability to selectively stop (suspend) the execution of processes and continue (resume) their execution at a later point. A user typically employs this facility via an interactive interface supplied jointly by the system's terminal driver and Bash. The shell associates a job with each pipeline.

What does no job control in this shell mean?

Without job control, you can put a job in the background by adding & to the command line, and that's about all the control you have. With job control, you can additionally suspend a running foreground job with Ctrl+Z and resume a suspended job in the foreground with fg.
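For instance, in an interactive shell (an illustrative session, not taken from the question):

    $ sleep 100                       # foreground job
    ^Z                                # Ctrl+Z suspends it
    [1]+  Stopped                 sleep 100
    $ bg %1                           # resume it in the background
    $ fg %1                           # bring it back to the foreground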

What is CTRL C in bash?

When you hit Ctrl+C, the line discipline of your terminal sends SIGINT to the processes in the foreground process group. Bash, when job control is disabled, runs everything in the same process group as the bash process itself. Job control is disabled by default when Bash interprets a script.
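A quick way to see the default from inside a script is to check bash's $- option string, where 'm' indicates monitor mode (a minimal sketch):

    #!/bin/bash
    # 'm' appears in $- only when monitor mode (job control) is enabled
    case $- in
        *m*) echo "job control is on"  ;;
        *)   echo "job control is off" ;;  # the default when bash runs a script
    esac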


2 Answers

What he meant is that job control is turned off by default in non-interactive mode (i.e. in a script).

From the bash man page:

    JOB CONTROL
        Job control refers to the ability to selectively stop (suspend) the
        execution of processes and continue (resume) their execution at a
        later point.  A user typically employs this facility via an
        interactive interface supplied jointly by the system's terminal
        driver and bash.

and

    set [--abefhkmnptuvxBCHP] [-o option] [arg ...]
        ...
        -m      Monitor mode.  Job control is enabled.  This option is on by
                default for interactive shells on systems that support it (see
                JOB CONTROL above).  Background processes run in a separate
                process group and a line containing their exit status is
                printed upon their completion.

When he said it "is stupid", he meant not only that:

  1. job control is meant mostly for facilitating interactive control (whereas a script can work directly with the PIDs, as in the sketch below), but also
  2. to quote his original answer, that it "... relies on the fact that you didn't start any other jobs previously in the script which is a bad assumption to make." Which is quite correct.
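For illustration, a minimal sketch of the PID-based approach (the command names are placeholders, not from the original script):

    #!/bin/bash
    first_task &            # placeholder background command
    pid1=$!                 # remember its PID
    second_task &           # another placeholder background command
    pid2=$!
    kill -STOP "$pid1"      # suspend the first task by PID
    kill -CONT "$pid1"      # resume it
    wait "$pid1" "$pid2"    # wait for both to finish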

UPDATE

In answer to your comment: yes, nobody will stop you from using job control in your bash script -- there is no hard case for forcefully disabling set -m (i.e. yes, job control from the script will work if you want it to). Remember that in the end, especially in scripting, there is always more than one way to skin a cat, but some ways are more portable, more reliable, or make it simpler to handle error cases, parse the output, etc.
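For example, a minimal sketch of turning it on explicitly (placeholder commands again):

    #!/bin/bash
    set -m          # enable monitor mode (job control) inside the script
    first_task &    # placeholder command; becomes job %1
    second_task &   # placeholder command; becomes job %2
    jobs            # the job-control builtins are now available
    fg %1           # bring the first job back into the foreground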

Your particular circumstances may or may not warrant a way different from what lhunath (and other users) deem "best practices".

answered Sep 26 '22 by vladr


Job control with bg and fg is useful only in interactive shells. But & in conjunction with wait is useful in scripts too.

On multiprocessor systems, spawning background jobs can greatly improve a script's performance, e.g. in build scripts where you want to start at least one compiler per CPU, or when processing images with ImageMagick tools in parallel, etc.

The following example runs up to 8 parallel gcc's to compile all source files in an array:

    #!/bin/bash
    # ... (set up the sourcefiles array here)
    for ((i = 0, end = ${#sourcefiles[@]}; i < end; )); do
        for ((cpu_num = 0; cpu_num < 8; cpu_num++, i++)); do
            if ((i < end)); then gcc -c "${sourcefiles[$i]}" & fi
        done
        wait
    done

There is nothing "stupid" about this. But you'll need the wait command, which waits for all background jobs before the script continues. The PID of the last background job is stored in the $! variable, so you may also wait ${!}. Note also the nice command (for running such background jobs at a lower priority).
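For instance, a small sketch combining both (reusing the sourcefiles array from the example above):

    # start one compile at lower priority and wait for just that job
    nice -n 10 gcc -c "${sourcefiles[0]}" &
    wait "$!"    # $! holds the PID of the most recently started background job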

Sometimes such code is useful in makefiles:

    buildall:
            for cpp_file in *.cpp; do gcc -c $$cpp_file & done; wait

This gives much finer control than make -j.

Note that & is a line terminator like ; (write command& not command&;).

Hope this helps.

answered Sep 25 '22 by Andreas Spindler