Using logback, I would like to start a new log file every time an async job starts, so I need to call rollover manually. But when I try to get the appender, I get null instead. Below is my config:
<configuration scan="true">
  <timestamp key="time" datePattern="yyyy-MM-dd_HH_mm"/>
  <logger name="com.my.com.pany" level="DEBUG">
    <appender name="TEST" class="ch.qos.logback.core.rolling.RollingFileAppender">
      <file>logs/log_TEST_${time}.log</file>
      <triggeringPolicy
          class="com.my.com.pany.myapp.logging.ManualRollingPolicy">
      </triggeringPolicy>
      <append>true</append>
      <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
      </encoder>
    </appender>
  </logger>
</configuration>
I call rollover like this:
ch.qos.logback.classic.Logger logF = (ch.qos.logback.classic.Logger) LoggerFactory.getLogger("com.my.com.pany");
RollingFileAppender<ILoggingEvent> appender = (RollingFileAppender<ILoggingEvent>) logF.getAppender("test");
appender.rollover();
I extended TimeBasedRollingPolicy<E> so that the log only rolls over when I trigger the async job:
@NoAutoStart
public class ManualRollingPolicy<E> extends TimeBasedRollingPolicy<E> {
}
Could someone help me with this issue?
EDIT: Upon further investigation I can see that logF has an appenderList of size 1, and that appender has my custom RollingPolicy properly set. However, the name property of this appender is null, and I think that is why I can't look it up by name.
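One way to diagnose this is to iterate over the appenders attached to the logger instead of looking one up by name; this also reveals what name (if any) each attached appender actually carries. Note that getAppender(String) is case-sensitive, so "test" would never match an appender configured as name="TEST" anyway. A minimal sketch, assuming logback-classic is on the classpath:

```java
import java.util.Iterator;

import org.slf4j.LoggerFactory;

import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.Appender;
import ch.qos.logback.core.rolling.RollingFileAppender;

public class RolloverTrigger {

    // Walk every appender attached to the logger rather than relying on
    // getAppender(name), which returns null when the name does not match.
    public static void rolloverAll() {
        Logger logF = (Logger) LoggerFactory.getLogger("com.my.com.pany");
        for (Iterator<Appender<ILoggingEvent>> it = logF.iteratorForAppenders(); it.hasNext();) {
            Appender<ILoggingEvent> appender = it.next();
            System.out.println("attached appender name: " + appender.getName());
            if (appender instanceof RollingFileAppender) {
                ((RollingFileAppender<ILoggingEvent>) appender).rollover();
            }
        }
    }
}
```

If the printed name is null, the config never assigned one to the attached appender, which matches the behavior described above.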
So I found a pretty neat workaround which someone may find helpful: you can use a SiftingAppender. "As its name implies, a SiftingAppender can be used to separate (or sift) logging according to a given runtime attribute."
You configure your appender to discriminate on some unique parameter, in my case the start date of the batch job:
<appender name="FULL" class="ch.qos.logback.classic.sift.SiftingAppender">
  <discriminator>
    <key>id</key>
    <defaultValue>000000</defaultValue>
  </discriminator>
  <sift>
    <appender name="FULL-${id}" class="ch.qos.logback.core.FileAppender">
      <file>logs/log_${id}.log</file>
      <append>false</append>
      <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
      </layout>
    </appender>
  </sift>
</appender>
Then you manually call:
MDC.put("id", id);
to start a new log, and when you are done with this particular log file you just log the FINALIZE_SESSION_MARKER constant:
logger.info(ClassicConstants.FINALIZE_SESSION_MARKER);
In my opinion this is flexible enough to answer the question of manual rollover.
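Putting the two calls together, a hedged sketch of a batch entry point (assuming logback-classic on the classpath; the run method and job id are hypothetical, and the "id" MDC key matches the discriminator key in the SiftingAppender config above):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import ch.qos.logback.classic.ClassicConstants;

public class BatchJob {

    private static final Logger logger = LoggerFactory.getLogger(BatchJob.class);

    // Each call routes its logging into a fresh logs/log_<jobId>.log file
    // via the SiftingAppender's "id" discriminator.
    public static void run(String jobId) {
        MDC.put("id", jobId); // subsequent logging on this thread is sifted by jobId
        try {
            logger.info("batch job {} started", jobId);
            // ... actual job work here ...
        } finally {
            // FINALIZE_SESSION_MARKER tells the SiftingAppender to close the
            // nested appender for this discriminator value.
            logger.info(ClassicConstants.FINALIZE_SESSION_MARKER, "batch job {} done", jobId);
            MDC.remove("id");
        }
    }
}
```

Closing the session via the marker matters: without it, the SiftingAppender keeps the nested appender (and its file handle) open until its timeout elapses.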