Why and when shouldn't I kill a thread?

I am writing a multithreaded socket server and I need to know for sure.

Articles about threads say that I should wait for the thread to return, instead of killing it. In some cases though, the user's thread i want to kick/ban, will not be able to return properly (for example, I started to send a big block of data and send() blocks the thread at the moment) so I'll need just to kill it.

Why killing thread functions are dangerous and when can they crash the whole application?

asked Nov 10 '10 by Pythagoras of Samos


People also ask

Can a thread be killed?

A thread is automatically destroyed when its run() method has completed. It might, however, be necessary to kill/stop a thread before it has completed its life cycle. Previously, the methods suspend(), resume() and stop() were used to manage the execution of threads.

Why is thread stop unsafe?

Because it is inherently unsafe. Stopping a thread causes it to unlock all the monitors that it has locked. (The monitors are unlocked as the ThreadDeath exception propagates up the stack.)

Does killing a process kill all threads?

When you kill a process, everything that process owns, including its threads, is also killed. The Terminated property is irrelevant; the system just kills everything.

Can a thread kill a process?

Threads are an integral part of the process and cannot be killed outside it.


4 Answers

Killing a thread means stopping all execution exactly where it is at that moment. In particular, it will not execute any destructors. This means sockets and files won't be closed, dynamically-allocated memory will not be freed, mutexes and semaphores won't be released, etc. Killing a thread is almost guaranteed to cause resource leaks and deadlocks.
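To make that concrete, here is a minimal C++ sketch (the names are invented for illustration). If the thread is killed anywhere inside the function body, neither destructor runs: the mutex stays locked forever and the buffer's memory leaks.

    #include <mutex>
    #include <vector>

    std::mutex table_mutex;  // shared with other worker threads

    void client_worker()
    {
        std::lock_guard<std::mutex> lock(table_mutex);  // normally unlocked by its destructor
        std::vector<char> buffer(64 * 1024);            // normally freed by its destructor

        // ... work with the shared data and the buffer ...
    }   // a killed thread never reaches the end of this scope, so neither cleanup happens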

Thus, your question is kind of reversed. The real question should read:

When, and under what conditions can I kill a thread?

So, you can kill a thread only when you're convinced that no leaks or deadlocks can occur, not just now but also after the other thread's code is modified later, which makes the guarantee pretty much impossible to give.


In your specific case, the solution is to use non-blocking sockets and check some thread- or user-specific flag between calls to send() and recv(). This will likely complicate your code, which is probably why you've been reluctant to do it, but it's the proper way to go about it.
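For illustration, a rough sketch of that pattern, assuming POSIX non-blocking sockets and a made-up per-client "kicked" flag (none of these names come from the question):

    #include <atomic>
    #include <cerrno>
    #include <cstddef>
    #include <sys/types.h>
    #include <sys/socket.h>

    std::atomic<bool> kicked{false};  // set from another thread to kick/ban this client

    // Send a whole buffer on a non-blocking socket, checking the flag between attempts.
    bool send_all(int sock, const char* data, size_t len)
    {
        size_t sent = 0;
        while (sent < len)
        {
            if (kicked.load())
                return false;  // asked to stop: return normally so destructors run

            ssize_t n = send(sock, data + sent, len - sent, 0);
            if (n > 0) {
                sent += static_cast<size_t>(n);
            } else if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                // socket buffer is full; a poll()/select() with a timeout would go here
            } else {
                return false;  // real error
            }
        }
        return true;
    }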

Moreover, you will quickly realize that a thread-per-client approach doesn't scale, so you'll change your architecture and rewrite a lot of it anyway.

answered Sep 20 '22 by André Caron


Killing a thread can cause your program to leak resources because the thread did not get a chance to clean up after itself. Consider closing the socket handle the thread is sending on. This will cause the blocking send() to return immediately with an appropriate error code. The thread can then clean up and die peacefully.
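A hedged sketch of that idea with POSIX sockets (the function name is made up; calling shutdown() before close() is the usual way to wake a thread blocked in send()/recv() without yanking the descriptor out from under it):

    #include <sys/socket.h>
    #include <unistd.h>

    // Called from the controlling thread to kick a client.
    void kick_client(int client_sock)
    {
        // Wakes up the worker's blocked send()/recv(), which then returns an
        // error (or 0 bytes) and lets the worker clean up and exit on its own.
        shutdown(client_sock, SHUT_RDWR);

        // Let the worker thread itself call close(client_sock) after it returns,
        // so the descriptor isn't closed while another thread is still using it.
    }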

answered Sep 23 '22 by Ferruccio


If you kill your thread the hard way, it can leak resources.

You can avoid that by designing your thread to support cancellation.

Avoid blocking calls, or use blocking calls with a timeout. Receive or send data in smaller chunks, or asynchronously.
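For example, a cancellation-friendly receive loop might look roughly like this (POSIX poll() with a timeout plus a made-up stop flag; the details are illustrative, not prescriptive):

    #include <atomic>
    #include <poll.h>
    #include <sys/socket.h>

    std::atomic<bool> stop_requested{false};  // set by whoever wants the thread to finish

    void receive_loop(int sock)
    {
        char buf[4096];
        while (!stop_requested.load())
        {
            pollfd pfd{sock, POLLIN, 0};
            int ready = poll(&pfd, 1, 500);  // wait at most 500 ms, then re-check the flag
            if (ready < 0)
                break;                       // poll error
            if (ready == 0)
                continue;                    // timeout: loop around and test the flag again

            ssize_t n = recv(sock, buf, sizeof(buf), 0);
            if (n <= 0)
                break;                       // peer closed the connection, or an error
            // ... handle the received chunk ...
        }
        // normal return: destructors run and the owner can close the socket
    }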

answered Sep 22 '22 by frast


You really don't want to do this.

If you kill a thread while it holds a critical section, that critical section won't be released, which will likely break your whole application. Certain C library calls, such as heap memory allocation, use critical sections internally; if you happen to kill your thread while it's in the middle of a "new", then any other thread in your program that subsequently calls new will block forever.

You simply can't do this safely without really extreme measures, which are much more restrictive than simply signalling the thread to terminate itself.

answered Sep 23 '22 by jcoder