List:       logback-user
Subject:    Re: [logback-user] Logback becomes excruciatingly slow during rollover
From:       Chris Hamilton <chamilton-dev () compucafe ! com>
Date:       2016-04-28 13:26:32
Message-ID: D1E6C509-E709-45E4-8381-D9751F4E477B () compucafe ! com

I notice that you have defined the files to be rolled with a .zip extension. 

We are using Logback 1.0.13 in our project and experienced issues with log rolling
causing the JVM to effectively halt while the rolled log file was being compressed (we
were using GZ). It appears that the rolling of the log is protected by a synchronized
lock (which makes sense) and that the compression routine is called within that lock
(which is not good). My only evidence of this is empirical because, as I said,
everything else just stopped.

We resolved the problem by simply not using compression in the logback configuration.
This was done by removing the gz extension (.zip in your case) from the FileNamePattern
tag.
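
For illustration only, and assuming you keep the fixed-window setup you quoted below,
the change is just the FileNamePattern; everything else in the policy stays the same:

    <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
      <!-- No .zip/.gz suffix, so Logback should not compress during rollover -->
      <FileNamePattern>D:/logs/product_%i.log</FileNamePattern>
      <MinIndex>0</MinIndex>
      <MaxIndex>9</MaxIndex>
    </rollingPolicy>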

We then wrote a cron job to find all the uncompressed log files and compress them. This
moves the compression of old log files from your application’s process to another
[cron-initiated] process that is not (as) mission-critical.
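
As a rough sketch only (the directory, schedule and file-name pattern below are
placeholders, not what we actually run, and you would want to make sure the compressed
names cannot collide with files your rolling policy still manages), the cron entry can
be as simple as:

    # Run nightly at 02:00: gzip rolled logs more than 24 hours old, outside the JVM
    0 2 * * *  find /path/to/logs -name 'product_*.log' -mtime +0 -exec gzip {} \;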

You might want to give this a try...

> On Apr 26, 2016, at 3:13 PM, Vesuvius Prime <vesuvius_prime@hotmail.com> wrote:
> 
> An update:
> 
> The customer disabled Splunk and are still reproducing the problem. They also
> report that Process Explorer indicates that no other process holds a handle to the
> log files.
> 
> From: Vesuvius Prime <vesuvius_prime@hotmail.com>
> Sent: Sunday, April 24, 2016 6:57 PM
> To: logback-user@qos.ch
> Subject: Logback becomes excruciatingly slow during rollover
> 
> Hi all,
> 
> We use Virgo server and it comes with Logback 1.0.7.v20121108-1250. One of the
> users of our product started complaining that after about a week of using our
> product, it was becoming so slow that it was basically unusable. We had a live
> session with the client and it turned out that the problem was in Logback. The
> client uses Windows. So far the problem has not been reported on Linux. Here's what
> I did to determine that Logback was the cause of the problem:
> 1. The appender is defined as:
> 
> <appender name="LOG_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
>   <file>D:/logs/product.log</file>
>   <rollingPolicy class="ch.qos.logback.core.rolling.FixedWindowRollingPolicy">
>     <FileNamePattern>D:/logs/product_%i.log.zip</FileNamePattern>
>     <MinIndex>0</MinIndex>
>     <MaxIndex>9</MaxIndex>
>   </rollingPolicy>
>   <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
>     <MaxFileSize>50MB</MaxFileSize>
>   </triggeringPolicy>
>   <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
>     <Pattern>[%d{yyyy-MM-dd'T'HH:mm:ss.SSSXXX}] [%-5level] %-28.28thread %X{sessionGuid} %-64.64logger{64} %X{medic.eventCode} %msg %ex%n</Pattern>
>   </encoder>
> </appender>
> 
> In the appender configuration above, I modified the directories and log names a
> little bit, so that the actual product name would not be visible.
> 
> 2. All the rolled-over zip files were generated and present, and the main log file,
> product.log, had exceeded its maximum size of 50 MB and had reached 64 MB. We had
> something like:
> product.log           <------- 64 MB
> product_0.log.zip
> product_1.log.zip
> product_2.log.zip
> product_3.log.zip
> product_4.log.zip
> product_5.log.zip
> product_6.log.zip
> product_7.log.zip
> product_8.log.zip
> product_9.log.zip
> 
> 3. This was suspicious, so I examined the log file and saw many messages that were
> logged out of order, e.g.
> 09:55:43.676
> 09:55:42.836
> 09:55:46.165
> 09:55:44.583
> ...
> 
> Furthermore, even the simplest operations, e.g. simply sending a redirect response
> back to the browser, were taking several seconds. (However, before sending the
> redirect, we were logging the URL to which we were redirecting, so I suspected that
> the logger was extremely slow.)
> 4. I moved the archived log files to a different directory and they got recreated
> *immediately*. I did this 7-8 times, each time with exactly the same result -- the
> .zip files got recreated immediately.
> 5. The main log, product.log, was locked by the JVM, so I couldn't move it or
> delete it. However, I opened it with a text editor, selected the whole content
> (Ctrl+A), deleted it (Del) and saved the file. As soon as I did that, the
> performance problems disappeared. The product began working normally. Also, the log
> file started growing with normal speed. Previously, it was growing super-slowly.
> 
> So, we told the users to do the following workaround when the preconditions from
> step 2 above were reached and the product became very slow:
> (1) Open the main log file, select all, delete, save.
> (2) Delete all archived .zip files
> 
> This workaround has been working fine so far. However, we still don't know why
> Logback behaves like this. That's why I'm writing this email. Can someone who
> knows the Logback source code figure out what might be happening and what triggers
> it?
> Here are some additional details that we've been able to figure out:
> 
> 1. When the users were experiencing the problem during the live session with them,
> I opened Process Explorer and saw that Splunk was holding a lock on the main log
> file -- product.log. I asked the users to kill the Splunk service and they did, but
> that didn't solve the problem. However, depending on the way Logback is written, it
> may turn out that it indeed went crazy because of Splunk, but *after* it goes
> crazy, killing the other process that holds a lock on the file doesn't help.
> 2. We managed to reproduce the same behavior in our company by letting two
> different appenders write to the same log file. Then, as soon as the preconditions
> from step 2 above were reached, everything became extremely slow, the logs couldn't
> roll over and the main log started growing beyond its specified limit of 50 MB.
> However, the users do *not* have such a configuration in which two appenders write
> to the same file. I'm only giving you information on how we managed to reproduce
> the slowness in case it might help you figure out what's wrong. The end result is
> the same -- Logback slows down to a crawl, but apparently the trigger in our users'
> environment is different and it's not related to appenders writing to the same
> file.
> 3. A thread dump while we were reproducing the problem (with two appenders writing
> to the same file) reveals the following:
> 3.1. One thread holds a lock (0x00000000dfa5d600):
> 
> "Thread-48" #199 daemon prio=5 os_prio=0 tid=0x0000000024b61000 nid=0x772c runnable \
>                 [0x000000002422e000]
> java.lang.Thread.State: RUNNABLE
> at java.util.zip.Deflater.deflateBytes(Native Method)
> at java.util.zip.Deflater.deflate(Deflater.java:442)
> - locked <0x00000000fdb8dcf0> (a java.util.zip.ZStreamRef)
> at java.util.zip.Deflater.deflate(Deflater.java:364)
> at java.util.zip.DeflaterOutputStream.deflate(DeflaterOutputStream.java:251)
> at java.util.zip.DeflaterOutputStream.write(DeflaterOutputStream.java:211)
> at java.util.zip.ZipOutputStream.write(ZipOutputStream.java:331)
> - locked <0x00000000fdb8da00> (a java.util.zip.ZipOutputStream)
> at ch.qos.logback.core.rolling.helper.Compressor.zipCompress(Compressor.java:110)
> at ch.qos.logback.core.rolling.helper.Compressor.compress(Compressor.java:58)
> at ch.qos.logback.core.rolling.FixedWindowRollingPolicy.rollover(FixedWindowRollingPolicy.java:147)
>  at ch.qos.logback.core.rolling.RollingFileAppender.rollover(RollingFileAppender.java:111)
>                 
> - locked <0x00000000dfa5d460> (a ch.qos.logback.core.spi.LogbackLock)
> at ch.qos.logback.core.rolling.RollingFileAppender.subAppend(RollingFileAppender.java:144)
>                 
> - locked <0x00000000dfa5d600> (a \
> ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy) at \
> ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103) at \
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>  at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>  at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:280)
> at ch.qos.logback.classic.Logger.callAppenders(Logger.java:267)
> at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:449)
> at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:403)
> at ch.qos.logback.classic.Logger.log(Logger.java:803)
> at org.apache.commons.logging.impl.SLF4JLocationAwareLog.warn(SLF4JLocationAwareLog.java:185)
>                 
> ...
> 
> 3.2. A multitude of threads are blocked and waiting to acquire that lock, e.g.:
> 
> "Thread-55" #206 daemon prio=5 os_prio=0 tid=0x000000001f50d000 nid=0x7288 waiting \
>                 for monitor entry [0x00000000263ef000]
> java.lang.Thread.State: BLOCKED (on object monitor)
> at ch.qos.logback.core.rolling.RollingFileAppender.subAppend(RollingFileAppender.java:142)
>                 
> - waiting to lock <0x00000000dfa5d600> (a \
> ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy) at \
> ch.qos.logback.core.OutputStreamAppender.append(OutputStreamAppender.java:103) at \
> ch.qos.logback.core.UnsynchronizedAppenderBase.doAppend(UnsynchronizedAppenderBase.java:88)
>  at ch.qos.logback.core.spi.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:48)
>  at ch.qos.logback.classic.Logger.appendLoopOnAppenders(Logger.java:280)
> at ch.qos.logback.classic.Logger.callAppenders(Logger.java:267)
> at ch.qos.logback.classic.Logger.buildLoggingEventAndAppend(Logger.java:449)
> at ch.qos.logback.classic.Logger.filterAndLog_0_Or3Plus(Logger.java:403)
> at ch.qos.logback.classic.Logger.log(Logger.java:803)
> at org.apache.commons.logging.impl.SLF4JLocationAwareLog.warn(SLF4JLocationAwareLog.java:185)
>                 
> ...
> 
> 
> 
> I have attached the logging configuration from the Virgo server. It's called
> serviceability.xml. (That's the default configuration file name for the Virgo
> server.)
> 
> I'm sorry the mail turned out so long. I hope this will not discourage you from
> reading it. I will be extremely grateful if you guys can help us figure out what
> makes Logback go crazy.
> _______________________________________________
> logback-user mailing list
> logback-user@qos.ch
> http://mailman.qos.ch/mailman/listinfo/logback-user



_______________________________________________
logback-user mailing list
logback-user@qos.ch
http://mailman.qos.ch/mailman/listinfo/logback-user
