 

Parallel computing in Octave on a single machine -- package and example

I would like to parallelize a for loop in Octave on a single machine (as opposed to a cluster). I asked a question about a parallel version of Octave a while ago: parallel computing in octave

The answer suggested that I download a parallel computing package, which I did. The package seems largely geared toward cluster computing; it does mention single-machine parallel computing, but it is not clear on how to run even a simple parallel loop.

I also found another question on SO about this, but it did not have a good answer for parallelizing loops in Octave: Running portions of a loop in parallel with Octave?

Does anyone know where I can find an example of running a for loop in parallel in Octave?

db1234 asked May 09 '12 16:05


2 Answers

Octave loops are slow, slow, slow, and you're far better off expressing things in terms of array-wise operations. Let's take the example of evaluating a simple trig function over a 2d domain, as in this 3d Octave graphics example (but with a more realistic number of points for computation, as opposed to plotting):

vectorized.m:

tic()
x = -2:0.01:2;
y = -2:0.01:2;
[xx,yy] = meshgrid(x,y);
z = sin(xx.^2-yy.^2);
toc()

Converting it to for loops gives us forloops.m:

tic()
x = -2:0.01:2;
y = -2:0.01:2;
z = zeros(401,401);
for i=1:401
    for j=1:401
        lx = x(i);
        ly = y(j);
        z(i,j) = sin(lx^2 - ly^2);
    endfor        
endfor
toc()

Note that the vectorized version already "wins" by being simpler and clearer to read, but there's another important advantage, too: the timings are dramatically different:

$ octave --quiet vectorized.m 
Elapsed time is 0.02057 seconds.

$ octave --quiet forloops.m 
Elapsed time is 2.45772 seconds.

So if you were using for loops, and you had perfect parallelism with no overhead, you'd have to spread this across about 119 processors (2.45772 s / 0.02057 s ≈ 119) just to break even with the non-loop version!

Don't get me wrong, parallelism is great, but first get things working efficiently in serial.

Almost all of Octave's built-in functions are already vectorized in the sense that they operate equally well on scalars or entire arrays, so it's often easy to convert things to array operations instead of doing things element by element. For those times when it's not so easy, you'll generally find that utility functions already exist to help you (like meshgrid, which generates a 2d grid from the Cartesian product of two vectors).
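To make the element-by-element versus array-wise distinction concrete, here is a small standalone sketch (the polynomial and array sizes are made up for illustration): both versions compute the same values, but the second uses the element-wise operators `.^` and `*` on the whole array at once.

```octave
x = linspace (0, 1, 1e6);

% loop version: one element at a time
y1 = zeros (size (x));
for i = 1:numel (x)
  y1(i) = 3*x(i)^2 + 2*x(i) + 1;
endfor

% vectorized version: operators act on the entire array in one call
y2 = 3*x.^2 + 2*x + 1;

% both give the same result
assert (max (abs (y1 - y2)) < eps)
```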

Jonathan Dursi answered Sep 21 '22 08:09


I am computing a large number of RGB histograms. I need to use explicit loops to do it, so computing each histogram takes noticeable time. For this reason, running the computations in parallel makes sense. Octave has an (experimental) function parcellfun, written by Jaroslav Hajek, that can be used to do it.

My original loop

histograms = zeros(size(files, 2), bins^3);
% calculate histogram for each image
for c = 1 : size(files, 2)
  I = imread(fullfile(dir, files{c}));
  h = myhistRGB(I, bins);
  histograms(c, :) = h(:); % change to 1D vector
end

To use parcellfun, I need to refactor the body of my loop into a separate function.

function histogram = loadhistogramp(file)
  I = imread(fullfile('.', file));
  h = myhistRGB(I, 8);
  histogram = h(:); % change to 1D vector
end

then I can call it like this

histograms = parcellfun(8, @loadhistogramp, files);
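For reference, here is a minimal, self-contained sketch of the same call pattern on a toy function (the anonymous function and inputs are made up for illustration). parcellfun takes the number of worker processes first, then a function handle, then one or more cell arrays of inputs, like cellfun; it is loaded here via `pkg load general`, as in the benchmark below (in newer Octave it lives in the `parallel` package).

```octave
pkg load general

% square eight numbers across four worker processes;
% num2cell turns the vector into the cell array parcellfun expects
squares = parcellfun (4, @(x) x^2, num2cell (1:8));
```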

I did a small benchmark on my computer. It has 4 physical cores with Intel Hyper-Threading enabled.

My original code

tic(); histograms2 = loadhistograms('images.txt', 8); toc();
warning: your version of GraphicsMagick limits images to 8 bits per pixel
Elapsed time is 107.515 seconds.

With parcellfun

octave:1> pkg load general; tic(); histograms = loadhistogramsp('images.txt', 8); toc();
parcellfun: 0/178 jobs done
warning: your version of GraphicsMagick limits images to 8 bits per pixel
[warning repeated once per worker process]
parcellfun: 178/178 jobs done
Elapsed time is 29.02 seconds.

The results from the parallel and serial versions were the same (only transposed):

octave:6> sum(sum((histograms'.-histograms2).^2))
ans = 0

When I repeated this several times, the running times were pretty much the same each time. The parallel version ran in around 30 seconds (± approx. 2 s) with 4, 8, and also 16 subprocesses.
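Since the timings above plateau once the physical cores are saturated, one option is to not hard-code the worker count at all. This is a sketch reusing the loadhistogramp function from above; note that nproc() counts logical processors, so on a hyper-threaded machine it may report twice the number of physical cores.

```octave
% match the worker count to what Octave reports is available,
% rather than hard-coding 8
nworkers = nproc ();
histograms = parcellfun (nworkers, @loadhistogramp, files);
```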

user7610 answered Sep 20 '22 08:09