
How to create a Linux cluster for running physics simulations in Java?

I am developing a scientific application used to perform physical simulations. The algorithms used are O(n³), so for a large set of data it takes a very long time to process. The application runs a simulation in around 17 minutes, and I have to run around 25,000 simulations. That is around one year of processing time.

The good news is that the simulations are completely independent from each other, so I can easily change the program to distribute the work among multiple computers.

There are multiple solutions I can see to implement this:

  • Get a multi-core computer and distribute the work among all the cores. Not enough for what I need to do.
  • Write an application that connects to multiple "processing" servers and distributes the load among them.
  • Get a cluster of cheap Linux computers, and have the program treat everything as a single entity.

Option number 2 is relatively easy to implement, so I'm not looking so much for suggestions on how to implement it (it can be done just by writing a program that waits on a given port for the parameters, processes the values, and returns the result as a serialized file). That would be a good example of grid computing.
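A minimal sketch of such a "processing" server, assuming the parameters arrive as a serialized array and the result is written back the same way (the class name and the simulation stub are hypothetical placeholders, not a real implementation):

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical worker: listens on a port, reads a serialized parameter
// object, runs the simulation, and writes the result back.
public class SimulationWorker {

    // Placeholder for the real O(n^3) computation.
    static double simulate(double[] params) {
        double sum = 0;
        for (double p : params) sum += p * p;
        return sum;
    }

    public static void main(String[] args) throws Exception {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 9000;
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                try (Socket client = server.accept();
                     // Create the output stream first so both ends can
                     // exchange stream headers without deadlocking.
                     ObjectOutputStream out =
                             new ObjectOutputStream(client.getOutputStream());
                     ObjectInputStream in =
                             new ObjectInputStream(client.getInputStream())) {
                    double[] params = (double[]) in.readObject();
                    out.writeObject(simulate(params));
                }
            }
        }
    }
}
```

The client side would open a socket to each worker, send a parameter set, and collect the serialized results as they come back.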

However, I wonder about the possibilities of the last option, a traditional cluster. How difficult is it to run a Java program on a Linux cluster? Will all the separate computers be treated as a single computer with multiple cores, making it easy to adapt the program? Are there any good pointers to resources that would let me get started? Or am I overcomplicating this, and am I better off with option number 2?

EDIT: As extra info, I am interested in how to implement something like what is described in this article from Wired Magazine: scientists replaced a supercomputer with a PlayStation 3 Linux cluster. Definitely, number two sounds like the way to go... but the coolness factor.

EDIT 2: The calculation is very CPU-bound. Basically there are a lot of operations on large matrices, such as inversion and multiplication. I tried to look for better algorithms for these operations, but so far I've found that the operations I need are O(n³) (in the libraries that are normally available). The data set is large (for such operations), but it is created on the client based on the input parameters.
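For reference, the textbook matrix multiplication that dominates this kind of workload is the classic triple loop, which is where the O(n³) comes from: doubling n makes each simulation roughly eight times slower.

```java
// Plain triple-loop matrix multiplication: n^3 multiply-adds.
public class MatMul {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++)
            for (int p = 0; p < k; p++) {
                // i-p-j loop order walks both c and b row-wise,
                // which is friendlier to the cache than i-j-p.
                double aip = a[i][p];
                for (int j = 0; j < m; j++)
                    c[i][j] += aip * b[p][j];
            }
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        double[][] c = multiply(a, b);
        System.out.println(c[0][0] + " " + c[1][1]); // 19.0 50.0
    }
}
```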


I see now that I had a misunderstanding of how a computer cluster under Linux works. I had assumed it would simply appear as if you had all the processors of all the computers available, just as with a single multi-core machine, but that doesn't seem to be the case. It seems that these supercomputers work by having nodes execute tasks distributed by some central entity, and that there are several different libraries and software packages that make this distribution easy.

So, since there is no such thing as number 3, the question really becomes: what is the best way to create a clustered Java application?

asked Dec 02 '08 by Mario Ortegón

1 Answer

I would very highly recommend the Java Parallel Processing Framework (JPPF), especially since your computations are already independent. I did a good bit of work with it as an undergraduate and it works very well. The implementation work is already done for you, so I think this is a good way to achieve the goal in "number 2."

http://www.jppf.org/
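JPPF handles the node discovery and task shipping for you; the programming model is essentially the same task-farming pattern you can prototype locally with a plain `ExecutorService` (the simulation stub below is a hypothetical stand-in for one 17-minute run, not JPPF's actual API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Task-farming pattern: each independent simulation becomes a Callable,
// a pool of workers drains the queue, and results are collected at the end.
public class SimulationFarm {

    // Stand-in for one simulation run over a given parameter set.
    static double runSimulation(int paramSet) {
        return paramSet * 2.0;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Double>> futures = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            final int paramSet = i;
            futures.add(pool.submit(() -> runSimulation(paramSet)));
        }
        double total = 0;
        for (Future<Double> f : futures) total += f.get();
        pool.shutdown();
        System.out.println(total); // 56.0
    }
}
```

With JPPF the pool is replaced by remote nodes, but each unit of work remains a self-contained task object, which matches the "completely independent simulations" in the question.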

answered Oct 22 '22 by BobbyShaftoe