I have a recursive divide-and-conquer algorithm that requires two computationally intensive base-case tasks before the dividing starts. The initial base cases are independent, so I want to run them in parallel. After the base cases, the dividing step runs the same task with different inputs between 0 and 1 and, based on the output, decides whether or not to split again. I got the base cases to work by creating a task wrapper object that fakes the recursion, but this feels like a kludge:
public static void doSomething() {
    ForkJoinPool pool = new ForkJoinPool();
    ArrayList<Object> al = new ArrayList<Object>();
    // Fake the top of the recursion with a wrapper task so both base cases run in parallel
    TaskWrapper tw = new TaskWrapper(true, -1);
    al.addAll(pool.invoke(tw));
}
@SuppressWarnings("serial")
public static class TaskWrapper extends RecursiveTask<ArrayList<Object>> {
private ArrayList<Object> al = new ArrayList<Object>();
private boolean arg;
private double input;
private Object out;
TaskWrapper(boolean ar, double in){
arg = ar;
input = in;
}
@Override
public ArrayList<Object> compute() {
if (arg == false) {
out = new Object(runIntensiveTask(input));
al.add(out);
}
else {
// Right Base Case
TaskWrapper right = new TaskWrapper(false, 1);
right.fork();
// Left Base Case
TaskWrapper left = new TaskWrapper(false, 0);
al.addAll(left.compute());
// Join with Right result
al.addAll(right.join());
}
return al;
}
}
Is there a simpler way to accomplish the same thing?
This is my first StackOverflow post, so please forgive any formatting or protocol errors. Thank you for the help.
ForkJoinPool is an implementation of ExecutorService that manages worker threads and provides tools for inspecting the thread pool's state and performance. Worker threads can execute only one task at a time, but the ForkJoinPool doesn't create a separate thread for every single subtask.
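To illustrate that relationship, here is a minimal sketch (the class name and the computation are made up for the example) that uses a ForkJoinPool purely through the ExecutorService API, with no forking at all:

import java.util.concurrent.Callable;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.Future;

public class PoolAsExecutor {
    public static void main(String[] args) throws Exception {
        // ForkJoinPool implements ExecutorService, so a plain Callable can be submitted to it.
        ForkJoinPool pool = new ForkJoinPool(); // defaults to one worker per available processor

        Future<Double> result = pool.submit(new Callable<Double>() {
            @Override
            public Double call() {
                return Math.sqrt(2.0); // stand-in for some intensive computation
            }
        });

        System.out.println(result.get()); // blocks until a worker thread finishes the task
        pool.shutdown();
    }
}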
The fork/join framework is an implementation of the ExecutorService interface that helps you take advantage of multiple processors. It is designed for work that can be broken into smaller pieces recursively. The goal is to use all the available processing power to enhance the performance of your application.
The fork/join framework in Java 7 is an implementation of the divide-and-conquer paradigm, in which a central ForkJoinPool executes branching ForkJoinTasks. ExecutorService is an Executor that provides methods to manage the progress tracking and termination of asynchronous tasks.
Unlike a plain Executor, to which you submit already-divided chunks of work, a ForkJoinPool acts recursively: it takes a big task, splits it into smaller tasks, and those smaller tasks split themselves again into subtasks until each subtask is atomic, i.e. no longer divisible, with the pieces handed to worker threads as they are forked.
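As a concrete sketch of that recursive splitting (the class name, threshold, and data here are illustrative, not taken from the question), a textbook RecursiveTask that sums an array by repeatedly dividing it:

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Splits the range until each piece is small enough to sum directly.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // below this size the task is treated as atomic
    private final long[] data;
    private final int from, to;

    SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += data[i];
            }
            return sum;
        }
        int mid = (from + to) / 2;
        SumTask left = new SumTask(data, from, mid);
        SumTask right = new SumTask(data, mid, to);
        right.fork();                   // hand the right half to another worker
        long leftSum = left.compute();  // keep working on the left half in this thread
        return leftSum + right.join();  // wait for the right half and combine
    }

    public static void main(String[] args) {
        long[] numbers = new long[1000000];
        for (int i = 0; i < numbers.length; i++) {
            numbers[i] = i;
        }
        long total = new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length));
        System.out.println(total);
    }
}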
It never surprises me how people use this framework. In a nutshell: the framework was devised to process balanced tree structures (a D.A.G.). When you use it for anything other than that, there are problems. You are not processing a balanced tree.
What Java needs is a general-purpose parallel engine, but what it has is this framework. So you're doing the best you can. If it works, good. I can't see any alternative in Java 7, but I'll look more deeply. I would like to know how this performs under a profiler (say, VisualVM). Since I don't have the runIntensiveTask code, there is no way for me to proceed. The join() in Java 7 creates "continuation threads", which can really impact an application. Let us know what the profiler says.