Issue
I am trying to persist the logs of a Spring Boot application. However, since the generated logs are large, I am using logback.xml to roll any file that grows beyond 350 MB into a compressed file.
I am able to roll a couple of MBs per day, but midway through, the service starts writing to a temp file. I have tried both "TimeBasedRollingPolicy" and "SizeAndTimeBasedRollingPolicy" with a triggering policy of "SizeAndTimeBasedFNATP", but the results are unchanged: the .tmp files are generated every time.
My logback.xml looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/home/xyz/logs/ProdLog.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/home/xyz/logs/log_%d{yyyy-MM-dd}_%i.log.zip</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <!-- or whenever the file size reaches 350MB -->
                <maxFileSize>350MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>5</maxHistory>
            <!--<maxFileSize>350MB</maxFileSize>-->
        </rollingPolicy>
        <encoder>
            <pattern>%date [%thread] %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>
                %d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n
            </Pattern>
        </layout>
    </appender>
    <root level="INFO">
        <appender-ref ref="FILE"/>
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
I see that the Jira ticket for the logback .tmp file issue is marked as closed. Could someone help with what needs to be modified here to avoid generating the temp files?
Solution
I'm having the same issue with logback 1.2.3. Apparently the bug is fixed in version 1.3.0, but I have found the lines of code responsible for generating those .tmp files and managed to avoid them.
This is code from TimeBasedRollingPolicy.java:
public void rollover() throws RolloverFailure {
    // when rollover is called the elapsed period's file has
    // been already closed. This is a working assumption of this method.
    String elapsedPeriodsFileName = timeBasedFileNamingAndTriggeringPolicy.getElapsedPeriodsFileName();
    String elapsedPeriodStem = FileFilterUtil.afterLastSlash(elapsedPeriodsFileName);

    if (compressionMode == CompressionMode.NONE) {
        if (getParentsRawFileProperty() != null) {
            renameUtil.rename(getParentsRawFileProperty(), elapsedPeriodsFileName);
        } // else { nothing to do if CompressionMode == NONE and parentsRawFileProperty == null }
    } else {
        if (getParentsRawFileProperty() == null) {
            compressionFuture = compressor.asyncCompress(elapsedPeriodsFileName, elapsedPeriodsFileName, elapsedPeriodStem);
        } else {
            compressionFuture = renameRawAndAsyncCompress(elapsedPeriodsFileName, elapsedPeriodStem);
        }
    }

    if (archiveRemover != null) {
        Date now = new Date(timeBasedFileNamingAndTriggeringPolicy.getCurrentTime());
        this.cleanUpFuture = archiveRemover.cleanAsynchronously(now);
    }
}

Future<?> renameRawAndAsyncCompress(String nameOfCompressedFile, String innerEntryName) throws RolloverFailure {
    String parentsRawFile = getParentsRawFileProperty();
    String tmpTarget = nameOfCompressedFile + System.nanoTime() + ".tmp";
    renameUtil.rename(parentsRawFile, tmpTarget);
    return compressor.asyncCompress(tmpTarget, nameOfCompressedFile, innerEntryName);
}
As you can see here, if you set a file name (the <file> tag) on the appender, renameRawAndAsyncCompress is called during rollover and generates a .tmp file (which is not bad in itself). I think the problem arises when some threads don't stop logging to that .tmp file, so you end up with some threads writing to the .tmp file and others writing to the new <file>.
If you set only the <fileNamePattern> tag and not the <file> tag, then no .tmp files should be generated.
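As a sketch based on the configuration in the question (with the STDOUT appender omitted for brevity), the FILE appender could be rewritten without the <file> tag roughly like this; the active log file name is then derived from the fileNamePattern for the current period:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- no <file> tag: the active file name comes from fileNamePattern -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>/home/xyz/logs/log_%d{yyyy-MM-dd}_%i.log.zip</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>350MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>5</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date [%thread] %-5level %logger{35} - %msg%n</pattern>
        </encoder>
    </appender>
    <root level="INFO">
        <appender-ref ref="FILE"/>
    </root>
</configuration>

With this setup, getParentsRawFileProperty() returns null in the rollover() code shown above, so the elapsed period's file is compressed directly and renameRawAndAsyncCompress, which creates the .tmp file, is never called.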
I hope this helps you!
Answered By - Miguel Morata