I'm toying around with the idea of building a logging system that pushes log statements to an internal buffer until it reaches a pre-defined capacity, and then dumps (flushes) the whole buffer at once.
This is because I like to sprinkle lots of TRACE statements throughout my methods (so I can see what's going on every few lines; it makes debugging easier, at least for me). And I'm afraid that, with potentially hundreds or thousands of log statements firing all over the place, the I/O demand will bog my programs down.
A "buffered" logger solution might alleviate this.
Three questions:
1. Is this something I should build myself, or does an existing logging library already handle it?
2. Will that many TRACE statements really bog down performance?
3. Can I rely on finalize() to flush the buffer when the program exits?
Don't reinvent this particular wheel if you can possibly avoid it. Look at log4j or, better, slf4j.
Log4j and slf4j are both very performant when tracing is turned off, so in the production system you can turn the logging level down and still have good performance.
Both log4j and slf4j write to the log files immediately and flush; they don't buffer by default, for the very good reason that you want the exception that caused your crash to actually appear in the log file. If you really want buffering, you can add it (see FileAppender#bufferedIO).
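For instance, with log4j 1.x you can turn buffering on programmatically (a sketch; the file name, pattern, and buffer size here are just illustrative choices):

```java
import org.apache.log4j.FileAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class BufferedAppenderSetup {
    public static void main(String[] args) {
        FileAppender appender = new FileAppender();
        appender.setFile("app.log");                          // illustrative path
        appender.setLayout(new PatternLayout("%d %-5p %c - %m%n"));
        appender.setBufferedIO(true);   // route writes through a BufferedWriter
        appender.setBufferSize(8192);   // buffer size in bytes
        appender.activateOptions();     // apply the settings and open the file

        Logger root = Logger.getRootLogger();
        root.setLevel(Level.TRACE);
        root.addAppender(appender);
        root.trace("this line sits in the buffer until it fills or is flushed");
    }
}
```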
As far as finalize() is concerned, it is not guaranteed to be called on exit. From the javadoc for System#runFinalizersOnExit:
"Deprecated. This method is inherently unsafe. It may result in finalizers being called on live objects while other threads are concurrently manipulating those objects, resulting in erratic behavior or deadlock. Enable or disable finalization on exit; doing so specifies that the finalizers of all objects that have finalizers that have not yet been automatically invoked are to be run before the Java runtime exits. *By default, finalization on exit is disabled.*"
My emphasis. So no, it seems like a buffered logger would have inherent problems.
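If you do buffer anyway, a shutdown hook is a more dependable flush point than finalize(), though even that only covers orderly shutdowns, not kill -9 or a JVM crash. A minimal sketch, using a plain PrintWriter as a stand-in for whatever buffered writer the logger holds:

```java
import java.io.PrintWriter;

public class FlushOnExitDemo {
    // Stand-in for the logger's buffered output; no autoflush,
    // so println() leaves data sitting in the writer's buffer.
    private static final PrintWriter out = new PrintWriter(System.out);

    public static void main(String[] args) {
        // Shutdown hooks run on normal JVM termination (return from main,
        // System.exit, SIGINT) -- but NOT on SIGKILL or a hard crash,
        // which is exactly when you most need the tail of the log.
        Runtime.getRuntime().addShutdownHook(new Thread(out::flush));
        out.println("buffered line that would otherwise be lost on exit");
    }
}
```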