I'm not sure the term "bombing" is the right one. By bombing, I mean a log happening many times with the same content (message and args).
For example, a denial of service attack might cause a log warning that an unauthenticated user is trying to access some API. Log:
[03-29 11:26:01.000] missing token
[03-29 11:26:01.001] missing token
[03-29 11:26:01.005] missing token
... overall 100000 times
The problem I'm trying to tackle is preventing the log from growing very large. Size is a problem, and other important messages might go unseen because of the overwhelming number of repetitions of the same message.
I'd like to be able to prevent such a log bombing and have some aggregated message like:
[03-29 11:26:01.000] missing token
[03-29 11:26:06.000] missing token [silenced. Overall 100000 times]
So, I'm looking for 2 features here:
1. Silencing of a message that repeats (same message and args) within some time window.
2. An aggregated summary once the window ends, reporting how many times the message occurred.
Any idea if and how this can be done using logback? Perhaps another logging tool can support this? Thanks.
I created a generic DuplicateActionFilterByInsertTime class with a small implementation for logging.
Example usage:
DuplicateLogFilter logSilencer = new DuplicateLogFilter(10000); // 10-second filter window
Logger logger = Logger.getLogger(DuplicateActionFilterByInsertTimeTest.class.getName());
logSilencer.log(new LoggerMessage(logger, Level.INFO, "Hello {0}", new Object[]{"Alik"}));
DuplicateLogFilter:
import java.util.logging.Level;

class DuplicateLogFilter extends DuplicateActionFilterByInsertTime<LoggerMessage> {

    DuplicateLogFilter(int filterMillis) {
        super(filterMillis);
        addListener(new DuplicateActionFilterByInsertTime.Listener<LoggerMessage>() {
            @Override
            public void onFilteringFinished(FilteredItem<LoggerMessage> filteredItem) {
                // Emits the aggregated summary once the silence window expires.
                filteredItem.getItem().getLogger().log(Level.INFO,
                        filteredItem.getItem().getMessage() + ". Filtered. Overall "
                                + filteredItem.getFilterInfo().getCount() + " messages",
                        filteredItem.getItem().getParams());
            }

            @Override
            public void onFilteringStarted(LoggerMessage loggerMessage) {
                // Logged on the second occurrence, when silencing kicks in.
                loggerMessage.getLogger().log(Level.INFO,
                        loggerMessage.getMessage() + ". Filtering duplicate logs...",
                        loggerMessage.getParams());
            }
        });
    }

    void log(LoggerMessage loggerMessage) {
        run(loggerMessage);
    }
}
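The snippets above rely on a LoggerMessage class that isn't shown in this answer. Since the filter uses it as a map key and calls run() on the first occurrence, it has to implement Runnable and base equals/hashCode on the message and its args. A minimal sketch of what such a class could look like (the field and accessor names match the usage above; everything else here is an assumption):

import java.util.Arrays;
import java.util.Objects;
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical sketch of the LoggerMessage value object assumed by the snippets above.
// equals/hashCode must cover message + params so duplicates collapse to one map key.
public class LoggerMessage implements Runnable {
    private final Logger logger;
    private final Level level;
    private final String message;
    private final Object[] params;

    public LoggerMessage(Logger logger, Level level, String message, Object[] params) {
        this.logger = logger;
        this.level = level;
        this.message = message;
        this.params = params;
    }

    @Override
    public void run() {
        // Invoked by the filter only for the first occurrence in the window.
        logger.log(level, message, params);
    }

    public Logger getLogger() { return logger; }
    public String getMessage() { return message; }
    public Object[] getParams() { return params; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof LoggerMessage)) return false;
        LoggerMessage that = (LoggerMessage) o;
        return message.equals(that.message) && Arrays.equals(params, that.params);
    }

    @Override
    public int hashCode() {
        return Objects.hash(message) * 31 + Arrays.hashCode(params);
    }
}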
The base class that actually does all the filtering, DuplicateActionFilterByInsertTime:
import java.util.Iterator;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.logging.Level;
import java.util.logging.Logger;

public class DuplicateActionFilterByInsertTime<E extends Runnable> {

    private static final Logger LOGGER = Logger.getLogger(DuplicateActionFilterByInsertTime.class.getName());

    private final long filterMillis;
    // Maps each action to its first-insert time and repetition count.
    private final ConcurrentHashMap<E, FilterInfoImpl> actionMap = new ConcurrentHashMap<>();
    // Holds actions in insertion order so the purger can stop at the first non-expired entry.
    private final ConcurrentLinkedQueue<E> actionQueue = new ConcurrentLinkedQueue<>();
    private final ScheduledExecutorService scheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
    private final AtomicBoolean purgerRegistered = new AtomicBoolean(false);
    private final Set<Listener<E>> listeners = ConcurrentHashMap.newKeySet();

    public DuplicateActionFilterByInsertTime(int filterMillis) {
        this.filterMillis = filterMillis;
    }
    public FilterInfo get(E e) {
        FilterInfoImpl insertionData = actionMap.get(e);
        if (insertionData == null || insertionData.isExpired(filterMillis)) {
            return null;
        }
        return insertionData;
    }
    public boolean run(E e) {
        // Existing entry: bump the count; the second occurrence starts the silence window.
        actionMap.computeIfPresent(e, (e1, insertionData) -> {
            int count = insertionData.incrementAndGet();
            if (count == 2) {
                notifyFilteringStarted(e1);
            }
            return insertionData;
        });
        // New entry: record it and queue it for purging.
        boolean isNew = actionMap.computeIfAbsent(e, e1 -> {
            FilterInfoImpl insertionData = new FilterInfoImpl();
            actionQueue.add(e1);
            return insertionData;
        }).getCount() == 1;
        tryRegisterPurger();
        if (isNew) {
            // Only the first occurrence in the window actually runs (i.e. logs).
            e.run();
        }
        return isNew;
    }
    private void tryRegisterPurger() {
        if (!actionMap.isEmpty() && purgerRegistered.compareAndSet(false, true)) {
            scheduledExecutorService.schedule(() -> {
                try {
                    for (Iterator<E> iterator = actionQueue.iterator(); iterator.hasNext(); ) {
                        E e = iterator.next();
                        FilterInfoImpl insertionData = actionMap.get(e);
                        if (insertionData == null) {
                            // Already purged elsewhere; just drop it from the queue.
                            iterator.remove();
                            continue;
                        }
                        if (insertionData.isExpired(filterMillis)) {
                            iterator.remove();
                            FilterInfoImpl removed = actionMap.remove(e);
                            if (removed != null) {
                                notifyFilteringFinished(new FilteredItem<>(e, removed));
                            }
                        } else {
                            // The queue is ordered by insert time, so the remaining
                            // elements haven't expired yet.
                            break;
                        }
                    }
                } finally {
                    purgerRegistered.set(false);
                    tryRegisterPurger();
                }
            }, filterMillis, TimeUnit.MILLISECONDS);
        }
    }
    private void notifyFilteringFinished(FilteredItem<E> filteredItem) {
        // Notify off-thread so a slow listener can't block the purger.
        new Thread(() -> listeners.forEach(l -> {
            try {
                l.onFilteringFinished(filteredItem);
            } catch (Exception e) {
                LOGGER.log(Level.WARNING, "Purge notification failed. Continuing to next one (if exists)", e);
            }
        })).start();
    }

    private void notifyFilteringStarted(final E e) {
        new Thread(() -> listeners.forEach(l -> {
            try {
                l.onFilteringStarted(e);
            } catch (Exception e1) {
                LOGGER.log(Level.WARNING, "Filtering started notification failed. Continuing to next one (if exists)", e1);
            }
        })).start();
    }
    public void addListener(Listener<E> listener) {
        listeners.add(listener);
    }

    public void removeListener(Listener<E> listener) {
        listeners.remove(listener);
    }
    public interface FilterInfo {
        long getInsertTimeMillis();

        int getCount();
    }

    public interface Listener<E> {
        void onFilteringStarted(E e);

        void onFilteringFinished(FilteredItem<E> filteredItem);
    }

    private static class FilterInfoImpl implements FilterInfo {
        private final long insertTimeMillis = System.currentTimeMillis();
        private final AtomicInteger count = new AtomicInteger(1);

        int incrementAndGet() {
            return count.incrementAndGet();
        }

        @Override
        public long getInsertTimeMillis() {
            return insertTimeMillis;
        }

        @Override
        public int getCount() {
            return count.get();
        }

        boolean isExpired(long expirationMillis) {
            return insertTimeMillis + expirationMillis < System.currentTimeMillis();
        }
    }

    public static class FilteredItem<E> {
        private final E item;
        private final FilterInfo filterInfo;

        FilteredItem(E item, FilterInfo filterInfo) {
            this.item = item;
            this.filterInfo = filterInfo;
        }

        public E getItem() {
            return item;
        }

        public FilterInfo getFilterInfo() {
            return filterInfo;
        }
    }
}
Source.
The DuplicateMessageFilter, which filters out exact duplicate messages, might fit your first requirement. Currently, the filter stops all duplicate messages after a specified threshold (via the allowedRepetitions setting), which might not be desirable. You would have to extend the filter if you preferred the duplicate count to reset on new messages or based on time; a sketch of such a time-window variant follows the config below. It does not provide a summary of the silenced logs, however.
Example logback config:
<configuration>
  <turboFilter class="ch.qos.logback.classic.turbo.DuplicateMessageFilter" allowedRepetitions="2"/>

  <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%date [%thread] %-5level %logger - %msg%n</pattern>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="console" />
  </root>
</configuration>
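For the time-based reset mentioned above, one possible approach (a minimal, untested sketch, not something logback ships) is a custom TurboFilter that denies a message only while its previous occurrence falls inside a configurable window. The class name TimeWindowDuplicateFilter and the windowMillis property are hypothetical; like DuplicateMessageFilter, this keys on the raw message format only:

import java.util.concurrent.ConcurrentHashMap;
import org.slf4j.Marker;
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;

// Hypothetical time-window variant of DuplicateMessageFilter: denies a message
// format seen within the last windowMillis, instead of counting repetitions forever.
public class TimeWindowDuplicateFilter extends TurboFilter {

    private long windowMillis = 10000;

    // Last time each message format was seen. Note this sketch never evicts
    // entries, so a production version would need some bounding or cleanup.
    private final ConcurrentHashMap<String, Long> lastSeen = new ConcurrentHashMap<>();

    @Override
    public FilterReply decide(Marker marker, Logger logger, Level level,
                              String format, Object[] params, Throwable t) {
        if (format == null) {
            return FilterReply.NEUTRAL;
        }
        long now = System.currentTimeMillis();
        Long previous = lastSeen.put(format, now);
        // First occurrence, or the last one fell outside the window: let it through.
        if (previous == null || now - previous > windowMillis) {
            return FilterReply.NEUTRAL;
        }
        return FilterReply.DENY;
    }

    public void setWindowMillis(long windowMillis) {
        this.windowMillis = windowMillis;
    }
}

It would be registered like the built-in filter, e.g. <turboFilter class="com.example.TimeWindowDuplicateFilter"><windowMillis>10000</windowMillis></turboFilter>, assuming logback's configurator populates windowMillis through its setter. Note that, like DuplicateMessageFilter, this still doesn't emit the aggregated summary from your second requirement; the custom class in the other answer covers that.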