What would make a single-thread task executor stop processing tasks?

I'm using a java.util.concurrent.ExecutorService that I obtained by calling Executors.newSingleThreadExecutor(). This ExecutorService sometimes stops processing tasks, even though it has not been shut down and continues to accept new tasks without throwing exceptions. Eventually, it builds up a large enough queue that my app dies with OutOfMemoryError exceptions.

The documentation seems to indicate that this single-thread executor should survive task-processing errors by firing up a new worker thread, if necessary, to replace one that has died. Am I missing something?

rcalder816 asked Dec 05 '08


3 Answers

It sounds like you have two different issues:

1) You're over-feeding the work queue. You can't just keep stuffing new tasks into the queue with no regard for the consumption rate of the task executors. You need to figure out some logic for knowing when to block new additions to the work queue (see the sketch after this list).

2) Any uncaught exception in a task's thread can completely kill the thread. When that happens, the ExecutorService spins up a new thread to replace it. But that doesn't mean you can ignore whatever problem is causing the thread to die in the first place! Find those uncaught exceptions and catch them!
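
Something like this would address both points at once. It's a minimal sketch, not a drop-in fix: the queue size of 1000 and the printStackTrace logging are arbitrary placeholders. Executors.newSingleThreadExecutor() uses an unbounded queue, but you can build the same one-worker setup from ThreadPoolExecutor with a bounded queue; CallerRunsPolicy gives you backpressure by running the task on the submitting thread whenever the queue is full:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedSingleThreadExecutor {

    // Same single worker thread as Executors.newSingleThreadExecutor(),
    // but with a bounded queue (1000 is an arbitrary placeholder).
    // CallerRunsPolicy makes a full queue slow the producer down
    // instead of letting the queue grow without limit.
    static ExecutorService createBoundedExecutor() {
        return new ThreadPoolExecutor(
                1, 1,                              // exactly one worker thread
                0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    // Wrapper for point 2: catch anything the task throws so the
    // failure is at least visible, instead of silently killing the
    // worker (or, with submit(), silently parking the error in a Future).
    static Runnable logging(Runnable task) {
        return () -> {
            try {
                task.run();
            } catch (Throwable t) {
                t.printStackTrace();   // replace with real logging
            }
        };
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = createBoundedExecutor();
        executor.execute(logging(() -> System.out.println("task ran")));
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

One caveat on that sketch: with CallerRunsPolicy the submitting thread can execute a task itself while the worker is busy, so strict one-at-a-time ordering is no longer guaranteed. If ordering matters to you, throttling submissions with a Semaphore is an alternative.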

This is just a hunch (because there's not enough info in your post to know otherwise), but I don't think your problem is that the task executor stops processing tasks. My guess is that it just doesn't process tasks as fast as you're creating them. (And the fact that your tasks sometimes die prematurely is probably orthogonal to the problem.)

At least, that's been my experience working with thread pools and task executors.


Okay, here's another possibility that sounds feasible based on your comment (that everything will run smoothly for hours until it suddenly comes to a crashing halt)...

You might have a rare deadlock between your task threads. Most of the time, you get lucky, and the deadlock doesn't manifest itself. But occasionally, two or more of your task threads get into a state where each is waiting for the release of a lock held by another. At that point, no more task processing can take place, and your work queue will pile up until you get the OutOfMemoryError.
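
If you want evidence for that hypothesis before changing any code, the JVM can report deadlocked threads directly (a jstack thread dump shows the same information). ThreadMXBean.findDeadlockedThreads() is a standard JDK method; the periodic watchdog wrapped around it here is just an illustrative sketch, and the 30-second interval is arbitrary:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DeadlockWatchdog {

    public static void start() {
        ThreadMXBean mxBean = ManagementFactory.getThreadMXBean();
        ScheduledExecutorService watchdog =
                Executors.newSingleThreadScheduledExecutor();

        // Every 30 seconds, ask the JVM whether any threads are
        // deadlocked on monitors or ownable synchronizers.
        watchdog.scheduleAtFixedRate(() -> {
            long[] ids = mxBean.findDeadlockedThreads();
            if (ids != null) {
                for (ThreadInfo info : mxBean.getThreadInfo(ids)) {
                    System.err.println("DEADLOCKED: " + info.getThreadName()
                            + " waiting on " + info.getLockName()
                            + " held by " + info.getLockOwnerName());
                }
            }
        }, 30, 30, TimeUnit.SECONDS);
    }
}
```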

Here's how I'd diagnose that problem:

Eliminate ALL shared state between your task threads. At first, this might require each task thread to make a defensive copy of all shared data structures it requires (see the sketch after these steps). Once you've done that, it should be completely impossible to experience a deadlock.

At this point, gradually reintroduce the shared data structures, one at a time (with appropriate synchronization). Re-run your application after each tiny modification to test for the deadlock. When you get that crashing situation again, take a close look at the access patterns for the shared resource and determine whether you really need to share it.
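
To make the defensive-copy step concrete, here's one hedged sketch (the class and method names are made up for illustration): the task snapshots the shared collection while constructing itself, then runs purely against its private copy, so it can't participate in a lock cycle while executing:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical task that never touches shared state while running:
// it copies everything it needs up front, so it cannot deadlock
// with other task threads during execution.
class SnapshotTask implements Runnable {

    private final List<String> privateCopy;

    SnapshotTask(List<String> shared) {
        // Copy while briefly holding the shared lock...
        synchronized (shared) {
            this.privateCopy = new ArrayList<>(shared);
        }
    }

    @Override
    public void run() {
        // ...then work entirely on local state: no locks needed here.
        for (String item : privateCopy) {
            process(item);
        }
    }

    private void process(String item) {
        // placeholder for the real work
    }
}
```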

As for me, whenever I write code that processes parallel tasks with thread pools and executors, I always try to eliminate ALL shared state between those tasks. As far as the application is concerned, they may as well be completely autonomous applications. Hunting down deadlocks is a drag, and in my experience, the best way to eliminate deadlocks is for each thread to have its own local state rather than sharing any state with other task threads.

Good luck!

benjismith answered Oct 28 '22


My guess would be that your tasks are blocking indefinitely, rather than dying. Do you have evidence, such as a log statement at the end of each task, suggesting that your tasks are completing successfully?

This could be a deadlock, or an interaction with some external process that is blocking.
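
One way to collect that evidence (a sketch; the 30-second deadline is an arbitrary placeholder): wait on the submitted task's Future with a timeout. A TimeoutException means the task is still running (blocked, deadlocked, or just slow), and an ExecutionException surfaces a failure you would otherwise never see:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TaskCompletionCheck {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        Future<?> future = executor.submit(() -> {
            // stand-in for the real task
        });

        try {
            future.get(30, TimeUnit.SECONDS);   // arbitrary deadline
            System.out.println("task completed");
        } catch (TimeoutException e) {
            // Still running after 30s: blocked, deadlocked, or just slow.
            System.err.println("task did not complete in time");
        } catch (ExecutionException e) {
            // The task threw: this is where silently dying tasks show up.
            e.getCause().printStackTrace();
        } finally {
            executor.shutdownNow();
        }
    }
}
```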

erickson answered Oct 28 '22


Although you don't give enough detail to be sure, the first thing I'd try is to have your tasks catch "Exception" at the top level and log the message.

I know it doesn't seem right, but occasionally (depending on a lot of variables) I've worked on code where something in a thread throws an exception that is never logged and never shows up on the console, yet the "executing" code exits its top-level loop or whatever code is causing your task to run.

I guess I'm just saying, make sure your tasks are not throwing an exception out.
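
That silent disappearance is real behavior with submit(): an exception thrown by a submitted task is captured in the returned Future rather than printed, so if you never call get(), it is simply lost. A small sketch of the trap and the catch-at-the-top fix:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SwallowedExceptionDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // Trap: the exception is stored in the discarded Future.
        // Nothing is printed; the task just seems to do nothing.
        executor.submit((Runnable) () -> {
            throw new IllegalStateException("you will never see this");
        });

        // Fix: catch at the top level of the task and log it yourself.
        executor.submit(() -> {
            try {
                throw new IllegalStateException("now it gets logged");
            } catch (Exception e) {
                e.printStackTrace();   // replace with real logging
            }
        });

        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```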

Bill K answered Oct 28 '22