Forking / Multi-Threaded Processes | Bash

Tags:

bash

shell

fork

I would like to make a section of my code more efficient. I'm thinking of having it fork off into multiple processes so that 50 or 100 iterations execute at once instead of just one.

For example (pseudo):

for line in file; do  foo; foo2; foo3; done 

I would like many iterations of this for loop to run at the same time. I know this can be done with forking. Would it look something like this?

while (x <= 50)
parent(child pid) {
    fork child()
}
child {
    do
        foo; foo2; foo3;
    done
    return child_pid()
}
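
In real Bash I'm guessing the fork would just be & on each iteration, something like this rough sketch (foo, foo2, foo3, and file stand in for my real commands and input):

while read -r line; do
  ( foo "$line"; foo2; foo3 ) &   # run the loop body in a forked subshell
done < file
wait                              # wait for every forked child to finish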

Or am I thinking about this the wrong way?

Thanks!

Greg asked Sep 21 '09 17:09

People also ask

Is forking the same as multithreading?

A fork creates a new process that looks exactly like the old (parent) process, but it is a separate process with a different process ID and its own memory. Threads are lightweight processes that carry less overhead.

Why do we fork when we can thread?

Forking is much safer and more secure because each forked process runs in its own virtual address space. If one process crashes or has a buffer overrun, it does not affect any other process at all. Threaded code is also much harder to debug than forked code.
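
In shell terms the same isolation shows up with a subshell, which is a forked copy of the script: a variable changed inside the child never reaches the parent, as this small sketch illustrates:

count=0
( count=99; echo "child sees count=$count" )   # the forked subshell changes only its own copy
echo "parent still sees count=$count"          # still prints 0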

Does fork create a new thread?

The fork() system call in UNIX creates a new process. The new process (called the child process) is an exact copy of the calling process (called the parent process) except for a few details; in particular, the child process has its own unique process ID.
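
In a shell script, every command started with & runs in such a forked child; $$ is the parent's process ID and $! the child's, so the difference is easy to see (a tiny sketch):

echo "parent PID: $$"
sleep 2 &                   # & forks a child process to run sleep
echo "child PID: $!"        # $! holds the PID of the most recent background child
wait                        # reap the child before the script exits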

Is Bash multithreaded?

Bash itself is not multithreaded, but utilities such as GNU parallel can run many jobs at once, for example one per CPU core, cutting down the time it takes to run a script.
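
As an example, assuming GNU parallel is installed, the question's loop body could be run up to 50 lines at a time (foo, foo2, foo3, and file are the question's placeholders):

parallel -j 50 'foo {}; foo2; foo3' :::: file   # {} is replaced by each line of file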


1 Answer

In non-interactive bash scripts, job control is disabled by default, so you can't use the commands jobs, fg, and bg.

Here is what works well for me:

#!/bin/bash

set -m   # enable job control

for i in `seq 30`; do   # start 30 jobs in parallel
  sleep 3 &
done

# Wait for all parallel jobs to finish
while [ 1 ]; do fg 2> /dev/null; [ $? == 1 ] && break; done

The last line uses "fg" to bring a background job into the foreground. It does this in a loop until fg returns 1 ($? == 1), which happens once there are no more background jobs left.
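
A variation on the same idea, not part of the original answer: a plain wait blocks until every background job has finished, and counting the output of jobs -r caps the number of concurrent children at the 50 the question asks for (foo, foo2, foo3, and file are the question's placeholders):

#!/bin/bash

max_jobs=50
while read -r line; do
  ( foo "$line"; foo2; foo3 ) &                      # fork the loop body into the background
  while [ "$(jobs -r | wc -l)" -ge "$max_jobs" ]; do
    sleep 1                                          # wait for a job slot to free up
  done
done < file

wait   # block until every remaining child has finished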

Aleksandr Levchuk answered Oct 24 '22 04:10