
List:       freebsd-hackers
Subject:    Re: [vfs] buf_daemon() slows down write() severely on low-speed CPU
From:       Konstantin Belousov <kostikbel@gmail.com>
Date:       2012-03-21 20:38:28
Message-ID: <20120321203828.GW2358@deviant.kiev.zoral.com.ua>


On Thu, Mar 15, 2012 at 08:00:41PM +0100, Svatopluk Kraus wrote:
> 2012/3/15 Konstantin Belousov <kostikbel@gmail.com>:
> > On Tue, Mar 13, 2012 at 01:54:38PM +0100, Svatopluk Kraus wrote:
> >> On Mon, Mar 12, 2012 at 7:19 PM, Konstantin Belousov
> >> <kostikbel@gmail.com> wrote:
> >> > On Mon, Mar 12, 2012 at 04:00:58PM +0100, Svatopluk Kraus wrote:
> >> >> Hi,
> >> >>
> >> >>    I have run into the following problem: if a big file (relative
> >> >> to 'hidirtybuffers') is being written, the write speed is very poor.
> >> >>
> >> >>    This is observed on a system with an Elan 486 and 32MB RAM (i.e.,
> >> >> a low-speed CPU and not much memory) running FreeBSD-9.
> >> >>
> >> >>    Analysis: a file is being written. All or almost all dirty
> >> >> buffers belong to the file. The file vnode is locked by the writing
> >> >> process almost all the time, so buf_daemon() cannot flush any dirty
> >> >> buffer, as its chance to acquire the file vnode lock is very low.
> >> >> The number of dirty buffers grows very slowly, and with each new
> >> >> dirty buffer even more slowly, because buf_daemon() eats more and
> >> >> more CPU time looping over the dirty buffer queue (with little or
> >> >> no effect).
> >> >>
> >> >>    This slowdown is started by buf_daemon() itself, when
> >> >> 'numdirtybuffers' reaches the 'lodirtybuffers' threshold and
> >> >> buf_daemon() is woken up by its own timeout. The timeout fires with
> >> >> an 'hz' period, but immediately switches to 'hz/10' as buf_daemon()
> >> >> fails to get back under the 'lodirtybuffers' threshold. When
> >> >> 'numdirtybuffers' (now growing slowly) reaches the ((lodirtybuffers
> >> >> + hidirtybuffers) / 2) threshold, buf_daemon() can be woken up from
> >> >> bdwrite() as well, which makes things much worse. Finally, and very
> >> >> slowly, 'hidirtybuffers' or 'dirtybufthresh' is reached, the dirty
> >> >> buffers are flushed, and everything starts over from the beginning...
> >> > Note that for some time now, bufdaemon's work has been distributed
> >> > between the bufdaemon thread itself and any thread that fails to
> >> > allocate a buffer, especially a thread that owns a vnode lock
> >> > covering a long queue of dirty buffers.
> >>
> >> However, the problem starts when numdirtybuffers reaches the
> >> lodirtybuffers count and ends around the hidirtybuffers count. There
> >> are still plenty of free buffers in the system.
> >>
> >> >>
> >> >>    On this system, the buffer size is 512 bytes and the default
> >> >> thresholds are the following:
> >> >>
> >> >>    vfs.hidirtybuffers = 134
> >> >>    vfs.lodirtybuffers = 67
> >> >>    vfs.dirtybufthresh = 120
> >> >>
> >> >>    For example, a 2MB file is copied to a flash disk in about 3
> >> >> minutes and 15 seconds. If dirtybufthresh is set to 40, the copy
> >> >> time is about 20 seconds.
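[Editor's note: for anyone reproducing the experiment above, the three thresholds are live sysctls. A sketch of the tuning step, using the exact values quoted in this thread (the /etc/sysctl.conf persistence step is an assumption about the reader's setup, not something done in the thread):

```shell
# Inspect the current dirty-buffer thresholds.
sysctl vfs.hidirtybuffers vfs.lodirtybuffers vfs.dirtybufthresh

# Temporarily lower dirtybufthresh, as in the 20-second copy result above.
sysctl vfs.dirtybufthresh=40

# To make the change persistent across reboots, add the line
#   vfs.dirtybufthresh=40
# to /etc/sysctl.conf.
```
]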
> >> >>
> >> >>    My solution is a mix of three things:
> >> >>    1. Suppression of buf_daemon() wakeups by setting bd_request to
> >> >> 1 in the main buf_daemon() loop.
> >> > I cannot understand this. Please provide a patch that shows what
> >> > you mean there.
> >> >
> >>       curthread->td_pflags |= TDP_NORUNNINGBUF | TDP_BUFNEED;
> >>       mtx_lock(&bdlock);
> >>       for (;;) {
> >> -             bd_request = 0;
> >> +             bd_request = 1;
> >>               mtx_unlock(&bdlock);
> > Is this a complete patch? The change just causes lost wakeups for
> > bufdaemon, nothing more.
> Yes, it's a complete patch. And exactly, it causes lost wakeups, which are:
> 1. !! UNREASONABLE !!, because bufdaemon is not sleeping,
> 2. not wanted, because it looks like the correct behaviour for the
> sleep with the hz/10 period. However, if the sleep with the hz/10
> period is expected to be woken up by bd_wakeup(), then bd_request
> should be set to 0 just before the sleep() call, and then bufdaemon's
> behaviour will be clear.
No, your description is wrong.

If bufdaemon is unable to flush enough buffers and numdirtybuffers is
still greater than lodirtybuffers, then bufdaemon enters the qsleep
state without resetting bd_request, with a timeout of one tenth of a
second. Your patch causes all wakeups in this case to be lost. This is
exactly the situation when we want bufdaemon to run harder to avoid
possible deadlocks, not to slow down.
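
[Editor's note: for reference, the tail of buf_daemon()'s main loop in
the FreeBSD 9 era has roughly the following shape. This is a from-memory
sketch of sys/kern/vfs_bio.c, not a verbatim excerpt; the "qsleep" case
Konstantin refers to is the hz/10 nap taken without touching bd_request:

```
mtx_lock(&bdlock);
if (numdirtybuffers <= lodirtybuffers) {
        /* Goal reached: announce the sleep and wait up to a second. */
        bd_request = 0;
        msleep(&bd_request, &bdlock, PVM, "psleep", hz);
} else {
        /*
         * Failed to get under lodirtybuffers: retry in hz/10 ticks.
         * bd_request is deliberately left alone here, so that a
         * bd_wakeup() can still cut the nap short.
         */
        msleep(&bd_request, &bdlock, PVM, "qsleep", hz / 10);
}
```
]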

> 
> All the stuff around bd_request and the bufdaemon sleep is under
> bdlock, so if bd_request is 0 and bufdaemon is not sleeping, then all
> wakeups are unreasonable! The patch is mainly about that.
Wakeups themselves are very cheap for the running process. Mostly, a
wakeup comes down to locking the sleepqueue and waking all threads
present in the sleepqueue's blocked queue. If there are no threads in
the queue, nothing is done.

> 
> >
> >>
> >> I read the description of the bd_request variable. However,
> >> bd_request should serve as an indicator that buf_daemon() is
> >> sleeping. I.e., the following paradigm should be used:
> >>
> >> mtx_lock(&bdlock);
> >> bd_request = 0;    /* now is the only time when wakeup() will be meaningful */
> >> sleep(&bd_request, ..., hz/10);
> >> bd_request = 1;    /* in case of a timeout, we must set it (bd_wakeup()
> >> already sets it) */
> >> mtx_unlock(&bdlock);
> >>
> >> My patch follows this paradigm. What happens without the patch in
> >> the described problem: buf_daemon() fails in its job and goes to
> >> sleep with the hz/10 period. It supposes that the next early wakeup
> >> will do nothing too. bd_request is untouched, but buf_daemon()
> >> doesn't know whether its last wakeup was made by bd_wakeup() or by
> >> the timeout. So, bd_request could be 0 and buf_daemon() can be woken
> >> up before hz/10 expires just by bd_wakeup(). Moreover, setting
> >> bd_request to 0 when buf_daemon() is not sleeping can cause
> >> time-consuming and useless wakeup() calls with no effect.

