 

Processing CSV file in Clojure in parallel

I have a large CSV file that contains independent items that take a fair bit of effort to process. I'd like to be able to process each line item in parallel. I found a sample piece of code for processing a CSV file on SO here:

Newbie transforming CSV files in Clojure

The code is:

(use '(clojure.contrib duck-streams str-utils))
(with-out-writer "coords.txt"
  (doseq [line (read-lines "coords.csv")]
    (let [[x y z p] (re-split #"," line)]
      (println (str-join \space [p x y z])))))

This was able to print out data from my CSV file, which was great, but it only used one CPU. I've tried various things, ending up with:

(pmap println (read-lines "foo"))

This works okay in interactive mode but does nothing when run from the command line. From a conversation on IRC, it sounds like this is because stdout isn't available by default to the threads pmap spawns.
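
To make the failure concrete, here's roughly the script I'm running (a sketch; the filenames are placeholders):

;; foo.clj (run as a script with clojure.main)
(use '[clojure.contrib.duck-streams :only (read-lines)])

;; pmap returns a lazy sequence; nothing at the top level of a
;; script forces it, so the lines may never actually be printed
(pmap println (read-lines "foo"))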

Really what I'm looking for is a way to idiomatically apply a function to each line of the CSV file and do so in parallel. I'd also like to print some results to stdout during testing if at all possible.

Any ideas?

asked Apr 04 '11 by PeterM

3 Answers

If you want the results in the output to be in the same order as in the input, then printing from pmap might not be a good idea. I would recommend creating a (lazy) sequence of the input lines, running pmap over that, and then printing the result of the pmap. Something like this should work:

(dorun (map println (pmap expensive-computation (read-lines "coords.csv"))))
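
For a version that doesn't depend on clojure.contrib, something like the following sketch should behave the same way; parse-line is just a hypothetical stand-in for your expensive per-line computation:

(require '[clojure.java.io :as io]
         '[clojure.string :as str])

;; hypothetical stand-in for the real expensive per-line work
(defn parse-line [line]
  (let [[x y z p] (str/split line #",")]
    (str/join " " [p x y z])))

(with-open [rdr (io/reader "coords.csv")]
  ;; pmap runs parse-line on several lines at once; the outer
  ;; map/println preserves input order, and dorun forces the whole
  ;; lazy chain before with-open closes the reader
  (dorun (map println (pmap parse-line (line-seq rdr)))))

dorun (rather than doall) is enough here, since we only care about the printing side effect, not the returned values.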
answered by subsub


If you want to do this at speed, you might want to look at this article on how Alex Osborne solved the Widefinder 2 challenge posed by Tim Bray. Alex goes into all aspects of parsing, processing, and collecting the results (in the Widefinder 2 case the input is a very large Apache log file). The actual code used is here.

answered by mac


I would be extremely surprised if that code could be sped up by using more cores. I'm 99% certain that the actual speed limit here is file I/O, which should be a couple of orders of magnitude slower than any single core you can throw at the problem.

And that's aside from the overhead you'll introduce when splitting these very minimal tasks over multiple CPUs. pmap isn't exactly free.

If you're sure that disk I/O isn't going to be a problem and you've got a lot of CSV parsing to do, simply parsing multiple files in their own threads will gain you a lot more for a lot less effort.
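
A rough sketch of that file-per-thread idea using futures; process-file is hypothetical, so substitute your real CSV parsing:

(require '[clojure.java.io :as io])

;; hypothetical per-file work; replace with real CSV parsing
(defn process-file [path]
  (with-open [rdr (io/reader path)]
    (count (line-seq rdr))))

(defn process-files [paths]
  ;; one future per file starts the work on a background thread;
  ;; deref blocks until each result is ready
  (->> paths
       (mapv #(future (process-file %)))
       (mapv deref)))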

answered by Joost Diepenmaat