How to implement multithreaded access to file-based queue in bash script

I have a bit of a problem designing a multiprocess bash script that goes through websites, follows the links it finds, and does some processing on every new page (it actually gathers email addresses, but that's an unimportant detail to the problem).

The script is supposed to work like this:

  1. Downloads a page
  2. Parses out all links and adds them to queue
  3. Does some unimportant processing
  4. Pops a URL from the queue and starts again from 1.

That by itself would be quite simple to program; the problem arises from two restrictions and a feature the script needs to have.

  • The script must not process one URL twice
  • The script must be able to process n (supplied as argument) pages at once
  • The script must be POSIX compliant (with the exception of curl) -> so no fancy locks

Now, I've managed to come up with an implementation that uses two files as queues: one stores all URLs that have already been processed, the other stores URLs that have been found but not yet processed.

The main process simply spawns a bunch of child processes that all share the queue files. Each child, in a loop that runs until the to-be-processed queue is empty, pops the top URL from the to-be-processed queue, processes the page, tries to add every newly found link to the already-processed queue, and, if that succeeds (i.e. the URL isn't already there), adds it to the to-be-processed queue as well.
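The spawning itself is roughly this (just a sketch, not the exact code; $nWorkers stands for the n argument, and prcsPages is the worker function shown further down):

#Spawns $nWorkers children that all run prcsPages over the shared queue files
i=0
while [ "$i" -lt "$nWorkers" ]; do
        prcsPages &
        i=$((i + 1))
done
wait    #wait for all the children to finish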

The issue lies in the fact that (AFAIK) you can't make the queue-file operations atomic, and therefore locking is necessary. And locking in a POSIX-compliant way is ... terror... slow terror.

The way I do it is following:

#Pops the first line from a file ($1) and prints it to stdout; if the file is empty, prints nothing and returns 1
fuPop(){
if [ -s "$1" ]; then
        sed -n '1p' "$1"        #print the first line
        sed -i '1d' "$1"        #then delete it in place
        return 0
else
        return 1
fi
}

#Appends a line ($1) to a file ($2) and returns 0 if it's not in it yet; if it already is, just returns 1
fuAppend(){
if grep -Fxq "$1" < "$2"; then
        return 1
else
        echo "$1" >> "$2"
        return 0
fi
}

#There are multiple processes running this function.
prcsPages(){
while [ -s "$todoLinks" ]; do
        luAckLock "$linksLock"
        linkToProcess="$(fuPop "$todoLinks")"
        luUnlock "$linksLock"

        prcsPage "$linkToProcess"
        ...
done
...
}


#The prcsPage function downloads the page, does some magic, and then calls prcsNewLinks and prcsNewEmails, which both get the list of new emails / new URLs in $1
#$doneEmails, ..., contain file paths; $mailsLock, ..., contain dir paths

prcsNewEmails(){
luAckLock "$mailsLock"
for newEmail in $1; do
        if fuAppend "$newEmail" "$doneEmails"; then
                echo "$newEmail"
        fi
done
luUnlock "$mailsLock"
}

prcsNewLinks(){
luAckLock "$linksLock"
for newLink in $1; do
        if fuAppend "$newLink" "$doneLinks"; then
                fuAppend "$newLink" "$todoLinks"
        fi
done
luUnlock "$linksLock"
}
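(For completeness: luAckLock and luUnlock aren't shown above. Since the lock variables hold directory paths, think of them as the classic mkdir-based lock, roughly like the sketch below; the real ones differ a bit, e.g. they wait a random amount of time between attempts.)

#Acquires the lock by atomically creating the lock directory ($1); spins until it succeeds
luAckLock(){
while ! mkdir "$1" 2>/dev/null; do
        sleep 1 #the real script sleeps a random amount of time here
done
}

#Releases the lock by removing the lock directory ($1)
luUnlock(){
rmdir "$1"
}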

The problem is that my implementation is slow (like, really slow), almost so slow that it doesn't make sense to use more than 2-10 child processes (decreasing the lock wait time helps a great deal). You can actually disable the locks (just comment out the luAckLock and luUnlock bits) and it works quite OK (and much faster), but every once in a while there are race conditions with the sed -i calls, and it just doesn't feel right.

The worst part (as I see it) is the locking in prcsNewLinks, as it takes quite a lot of time (most of the runtime, basically) and practically prevents the other processes from starting to process a new page (since that requires popping a new URL from the currently locked $todoLinks queue).

Now my question is: how do I do this better, faster, and nicer?

The whole script is here (it contains some signal magic, a lot of debug output, and generally not-that-great code).

BTW: Yes, you're right, doing this in bash - and in a POSIX-compliant way at that - is insane! But it's a university assignment, so I kind of have to do it.

//Though I feel it's not really expected of me to solve these problems, since the race conditions only show up frequently with 25+ threads, which is probably not something a sane person would test.


Notes to the code:

  1. Yes, the wait should have (and already has) a random duration. The code shared here was just a proof of concept written during a real analysis lecture.
  2. Yes, the number of debug outputs and their formatting is terrible, and there should be a standalone logging function. That's, however, not the point of my question.
Petrroll


1 Answer

First of all, do you need to implement your own HTML/HTTP spidering? Why not let wget or curl recurse through a site for you?
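For instance, something along these lines (an illustrative invocation, not tuned for your site; $startUrl is a placeholder), after which you'd just grep the downloaded pages for email addresses:

# recursively fetch the site two levels deep, politely, into pages/
wget --recursive --level=2 --no-parent --wait=1 --directory-prefix=pages/ "$startUrl"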

You could abuse the filesystem as a database and have your queues be directories of one-line files (or of empty files, where the filename itself is the data). That would give you producer-consumer locking, where producers touch a file, and consumers move it from the incoming directory to the processing/done directory.

The beauty of this is that multiple threads touching the same file Just Works. The desired result is the URL appearing exactly once in the "incoming" list, and that's what you get when multiple threads create files with the same name. Since you want de-duplication, you don't need locking when writing to the incoming list.

1) Downloads a page

2) Parses out all links and adds them to queue

For each link found,

grep -qxF "$url" already_checked || : > "incoming/$url"

or

[[ -e "done/$url" ]] || : > "incoming/$url"

3) Does some unimportant processing

4) Pops an URL from queue and starts from 1)

# mostly untested, you might have to debug / tweak this
local inc=( incoming/* )
# edit: this can make threads exit sooner than desired.
# See the comments for some ideas on how to make threads wait for new work
while [[ $inc != "incoming/*" ]]; do
    # $inc is shorthand for "${inc[0]}", the first array entry
    mv "$inc" "done/" || { rm -f "$inc"; continue; } # another thread got that link, or that url already exists in done/
    url=${inc#incoming/}
    download "$url";
    for newurl in $(link_scan "$url"); do
        [[ -e "done/$newurl" ]] || : > "incoming/$newurl"
    done
    process "$url"
    inc=( incoming/* )
done

edit: encoding URLs into strings that don't contain / is left as an exercise for the reader, although urlencoding / to %2F would probably work well enough.
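Something like this would probably be good enough (a sketch; the function names are made up, and it won't round-trip URLs that already contain a literal %2F):

# replace every "/" with "%2F" so a URL can be used as a filename, and back again
url_to_fname() { printf '%s\n' "$1" | sed 's,/,%2F,g'; }
fname_to_url() { printf '%s\n' "$1" | sed 's,%2F,/,g'; }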

I was thinking of moving URLs to a "processing" list per thread, but actually if you don't need to be able to resume from interruption, your "done" list can instead be a "queued & done" list. I don't think it's actually ever useful to mv "$url" "threadqueue.$$/" or something.

The "done/" directory will get pretty big, and start to slow down with maybe 10k files, depending on what filesystem you use. It's probably more efficient to maintain the "done" list as a file of one url per line, or a database if there's a database CLI interface that's fast for single commands.

Maintaining the done list as a file isn't bad, because you never need to remove entries. You can probably get away without locking it, even for multiple processes appending it. (I'm not sure what happens if thread B writes data at EOF between thread A opening the file and thread A doing a write. Thread A's file position might be the old EOF, in which case it would overwrite thread B's entry, or worse, overwrite only part of it. If you do need locking, maybe flock(1) would be useful. It gets a lock, then executes the commands you pass as args.)
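If you do end up needing it, that would look something like this (placeholder file names; flock comes from util-linux, so it's outside your POSIX constraint anyway):

# hold an exclusive lock on done.lock while appending one line to done.list;
# the lock is released automatically when the inner command exits
append_done() {
    flock done.lock sh -c 'printf "%s\n" "$1" >> done.list' _ "$1"
}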

If broken entries from the lack of write locking don't actually happen, then you might not need write locking at all. The occasional duplicate entry in the "done" list will be a tiny slowdown compared to having to lock for every check/append.

If you need strictly-correct avoidance of downloading the same URL multiple times, you need readers to wait for the writer to finish. If you can just sort -u the list of emails at the end, it's not a disaster for a reader to read an old copy while the list is being appended. Then writers only need to lock each other out, and readers can just read the file. If they hit EOF before a writer manages to write a new entry, then so be it.
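i.e. something like (placeholder file names):

sort -u collected_emails > emails.final    # de-duplicate once, at the very end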

I'm not sure whether it matters if a thread adds entries to the "done" list before or after removing them from the incoming list, as long as it adds them to "done" before processing. I was thinking that one order or the other might make races more likely to cause duplicate done entries and less likely to cause duplicate downloads / processing, but I'm not sure.


Peter Cordes