
boost::thread data structure sizes on the ridiculous side?

Compiler: clang++ x86-64 on linux.

It has been a while since I have written any intricate low-level systems code, and I usually program against the system primitives directly (Windows and pthreads/POSIX), so the ins and outs have slipped from my memory. I am working with boost::asio and boost::thread at the moment.

In order to emulate synchronous RPC against an asynchronous function executor (a boost::asio::io_service with multiple threads calling io_service::run, where requests are io_service::post'ed), I am using the Boost synchronization primitives. For curiosity's sake I decided to sizeof the primitives. This is what I get:

#include <iostream>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>

struct notification_object
{
  bool ready;
  boost::mutex m;
  boost::condition_variable v;
};
...
std::cout << sizeof(bool) << std::endl;
std::cout << sizeof(boost::mutex) << std::endl;
std::cout << sizeof(boost::condition_variable) << std::endl;
std::cout << sizeof(notification_object) << std::endl;
...

Output:

1
40
88
136
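
For context, the usage pattern behind these objects is roughly the following (a sketch only; io_svc and handler are placeholder names, and the io_service wiring is omitted):

#include <boost/bind.hpp>
#include <boost/thread/locks.hpp>

void handler(notification_object* n)
{
  // ... perform the actual work of the "RPC" here ...
  boost::lock_guard<boost::mutex> lk(n->m);
  n->ready = true;
  n->v.notify_one();
}

// Caller thread: post the work, then block until the handler signals completion.
notification_object n;
n.ready = false;
io_svc.post(boost::bind(&handler, &n));

boost::unique_lock<boost::mutex> lk(n.m);
while (!n.ready)
  n.v.wait(lk);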

Forty bytes for a mutex?! WTF! And 88 for a condition_variable! Please keep in mind that I'm repulsed by this bloated size because I am thinking of an application that could create hundreds of notification_objects.

This level of overhead for portability seems ridiculous; can someone justify it? As far as I can remember, these primitives should be 4 or 8 bytes wide, depending on the memory model of the CPU.
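
For comparison, the underlying POSIX types can be inspected the same way (sizes are of course platform- and libc-specific):

#include <iostream>
#include <pthread.h>

int main()
{
  // Raw POSIX primitives for comparison; results depend on the platform/libc.
  std::cout << sizeof(pthread_mutex_t) << std::endl;
  std::cout << sizeof(pthread_cond_t) << std::endl;
  return 0;
}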

asked Jul 25 '11 by Hassan Syed


1 Answer

When you look at the "size overhead" of any type of synchronization primitive, keep in mind that these cannot be packed too closely. That is because, e.g., two mutexes sharing a cacheline will end up in cacheline thrashing (false sharing) if they are in use concurrently, even if the users acquiring these locks never "conflict". I.e. imagine two threads running two loops:

for (;;) {
    lock(lockA);
    unlock(lockA);
}

and

for (;;) {
    lock(lockB);
    unlock(lockB);
}

You will see twice the total number of iterations when the loops run on two different threads, compared to one thread running one loop, if and only if the two locks are not within the same cacheline. If lockA and lockB are in the same cacheline, the number of iterations per thread will halve, because the cacheline containing both locks will permanently bounce between the CPU cores executing the two threads.

Hence, even though the actual data size of the primitive type underlying a spinlock or mutex might be only a byte or a 32-bit word, the effective data size of such an object is often larger.

Keep that in mind before asserting "my mutexes are too large". In fact, on x86/x64, 40 bytes is too small to prevent false sharing, as cachelines there are currently at least 64 bytes.
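
If you really do need many independent locks and want to rule out false sharing, one option is to give each lock a full cacheline explicitly. A minimal sketch, assuming 64-byte cachelines and a compiler with C++11 alignas support:

#include <boost/thread/mutex.hpp>

// One mutex per 64-byte cacheline: trades memory for scalability.
struct alignas(64) padded_mutex
{
  boost::mutex m;
};

static_assert(sizeof(padded_mutex) % 64 == 0, "padded_mutex should occupy whole cachelines");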

Beyond that, if you're highly concerned about memory usage, consider that notification objects need not be unique: a single condition variable can signal several different events (via the predicate that boost::condition_variable checks on wait). It would therefore be possible to use a single mutex/CV pair for a whole state machine instead of one such pair per state, as sketched below. The same goes for e.g. thread pool synchronization; having more locks than threads is not necessarily beneficial.
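
A minimal sketch of that idea (with made-up state names): one mutex/CV pair guards several events, and each waiter checks its own predicate:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>

// A single mutex/condition_variable pair serving several events.
struct request_state
{
  boost::mutex m;
  boost::condition_variable v;
  bool sent;        // event 1
  bool completed;   // event 2

  request_state() : sent(false), completed(false) {}

  void wait_until_completed()
  {
    boost::unique_lock<boost::mutex> lk(m);
    while (!completed)     // the predicate distinguishes which event we wait for
      v.wait(lk);
  }

  void mark_completed()
  {
    boost::lock_guard<boost::mutex> lk(m);
    completed = true;
    v.notify_all();        // waiters for other events simply re-check and sleep again
  }
};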

Edit: For a few more references on "false sharing" (and the negative performance impact caused by hosting multiple atomically-updated variables within the same cacheline), see (amongst others) the following SO postings:

  • false sharing in boost::detail::spinlock_pool?
  • False sharing and pthreads
  • False Sharing and Atomic Variables

As said, when using multiple "synchronization objects" (whether those are atomically-updated variables, locks, semaphores, ...) in a multi-core, cache-per-core configuration, allow each of them a separate cacheline of space. You're trading memory usage for scalability here, but really, if you get into the region where your software needs several million locks (making that GBs of memory), you either have the funding for a few hundred GB of memory (and a hundred CPU cores), or you're doing something wrong in your software design.

In most cases (a lock / an atomic for a specific instance of a class / struct), you get the "padding" for free as long as the object instance that contains the atomic variable is large enough.
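
For instance, in a per-object scenario like the following sketch, the surrounding payload already pushes each instance well past a cacheline, so no explicit padding is required:

#include <cstddef>
#include <boost/thread/mutex.hpp>

// The owning object's payload dwarfs the embedded lock, so the locks of two
// different instances never end up in the same cacheline.
struct connection
{
  boost::mutex lock;
  char buffer[4096];
  std::size_t used;
};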

answered Sep 29 '22 by FrankH.