I’m working on debug logging infrastructure for a server application. Each logging point in the source code specifies its level (CRITICAL, ERROR, etc.) among other parameters. So in source code a logging point looks like:

```cpp
DBG_LOG_HIGH( … )
```

which is a macro that expands to:

```cpp
if ( CURRENT_DEBUG_LOG_LEVEL >= DEBUG_LOG_LEVEL_HIGH ) {
    // prepare and emit log record
}
```

where `DEBUG_LOG_LEVEL_HIGH` is a predefined constant (let’s say 2) and `CURRENT_DEBUG_LOG_LEVEL` is some expression that evaluates to the current debug logging level set by the user.
The simplest approach would be to define `CURRENT_DEBUG_LOG_LEVEL` as:

```cpp
extern int g_current_debug_log_level;
#define CURRENT_DEBUG_LOG_LEVEL (g_current_debug_log_level)
```
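Putting the pieces together, a minimal self-contained sketch of this approach might look like the following (the level constants other than `DEBUG_LOG_LEVEL_HIGH` and the message formatting are my illustrative assumptions, not part of the original code):

```cpp
#include <cstdio>

// Illustrative level constants; the question only fixes HIGH = 2.
#define DEBUG_LOG_LEVEL_CRITICAL 0
#define DEBUG_LOG_LEVEL_ERROR    1
#define DEBUG_LOG_LEVEL_HIGH     2

extern int g_current_debug_log_level;
#define CURRENT_DEBUG_LOG_LEVEL (g_current_debug_log_level)

// The macro compiles down to a plain integer comparison guarding
// the (potentially expensive) preparation of the log record.
#define DBG_LOG_HIGH(msg)                                        \
    do {                                                         \
        if (CURRENT_DEBUG_LOG_LEVEL >= DEBUG_LOG_LEVEL_HIGH) {   \
            std::printf("HIGH: %s\n", (msg));                    \
        }                                                        \
    } while (0)

int g_current_debug_log_level = DEBUG_LOG_LEVEL_ERROR;
```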
I would like to allow the user to change the current debug logging level while the application is running, and it’s okay for the change to take a few seconds to take effect. The application is multi-threaded, and writes to `g_current_debug_log_level` can easily be serialized (for instance with a `CRITICAL_SECTION`), but in order not to impact performance the expression `( CURRENT_DEBUG_LOG_LEVEL >= DEBUG_LOG_LEVEL_HIGH )` should execute as fast as possible, so I would like to avoid using any thread synchronization mechanism there.
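For the write side, a serialized setter along these lines would do. This is a sketch: `std::mutex` stands in for the Windows `CRITICAL_SECTION` so the example is portable, and the name `set_debug_log_level` is my assumption:

```cpp
#include <mutex>

int g_current_debug_log_level = 0;

// Portable stand-in for the CRITICAL_SECTION mentioned above.
static std::mutex g_log_level_mutex;

// Writers are serialized against each other; the hot-path reads in
// the logging macros deliberately take no lock.
void set_debug_log_level(int new_level) {
    std::lock_guard<std::mutex> lock(g_log_level_mutex);
    g_current_debug_log_level = new_level;
}
```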
So my questions are:

1. Can the absence of synchronization on reads of `g_current_debug_log_level` cause an incorrect value to be read? While this should not affect application correctness (the user could have set the current debug logging level to that value anyway), it might affect performance, because it might cause the application to emit a very high volume of debug log for an uncontrollable period of time.

2. Will my solution guarantee that a change to the current debug logging level reaches all threads within an acceptable amount of time (let’s say a few seconds)? Ideally I would like the level change operation to be synchronous, so that once the user receives acknowledgement of the change she can count on subsequent log records being emitted according to the new level.

I would also greatly appreciate any suggestions for alternative implementations that satisfy the above requirements (minimal performance impact for the level comparison, and a synchronous level change with no more than a few seconds of latency).
There is nothing that requires that a write made on one thread on one core will ever become visible to another thread reading on another core, without providing some sort of fence to create a 'happens before' edge between the write and the read.
So to be strictly correct, you would need to insert the appropriate memory fence / barrier instructions after the write to the log level, and before each read. Fence operations aren't cheap, but they are cheaper than a full blown mutex.
In practice though, given a concurrent application that is using locking elsewhere, and the given fact that your program will continue to operate more or less correctly if the write does not become visible, it is likely that the write will become visible incidentally due to other fencing operations within a short timescale and meet your requirements. So you can probably get away with just writing it and skipping the fences.
But using proper fencing to enforce the happens-before edge is really the correct answer. FWIW, C++11 provides an explicit memory model which defines these semantics and exposes such fencing operations at the language level. But as far as I know no compiler yet fully implements the new memory model, so for C/C++ you need to use a lock from a library or explicit fencing.
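For reference, here is a sketch of what this looks like with C++11 atomics, assuming a compiler that supports `<atomic>` (the function name is mine, not from the question):

```cpp
#include <atomic>

std::atomic<int> g_current_debug_log_level{0};

// Fast path: a relaxed load compiles to a plain load on mainstream
// hardware, so the level check stays as cheap as reading an int,
// while the C++11 memory model makes the unsynchronized read
// well-defined (no data race), with eventual visibility of writes.
#define CURRENT_DEBUG_LOG_LEVEL \
    (g_current_debug_log_level.load(std::memory_order_relaxed))

// Slow path: the default sequentially consistent store provides the
// fencing that publishes the new level to other threads.
void set_debug_log_level(int new_level) {
    g_current_debug_log_level.store(new_level);
}
```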