List:       pgsql-performance
Subject:    Re: [PERFORM] Background writer underemphasized ...
From:       Greg Smith <gsmith@gregsmith.com>
Date:       2008-04-23 5:11:01
Message-ID: Pine.GSO.4.64.0804201106100.12647@westnet.com

On Sun, 20 Apr 2008, James Mansion wrote:

> Are you suggesting that the disk subsystem has already decided on its 
> strategy for a set of seeks and writes and will not insert new 
> instructions into an existing elevator plan until it is completed and it 
> looks at the new requests?

No, just that each component only gets to sort across the requests it 
sees, and because of that the sorting horizon may not be optimized the 
same way depending on how the writes are sent down.

Let me try to construct a credible example of this elusive phenomenon:

-We have a server with lots of RAM
-The disk controller has 256MB of cache
-We have 512MB of data to write that's spread randomly across the database 
disk.

Case 1:  Write early

Let's say the background writer writes half of the data right now in 
anticipation of needing those buffers for something else soon.  It's 
now in the controller's cache and sorted already.  The controller is 
working on it.  Presume it starts at the beginning of the disk and works 
its way toward the end, seeking past gaps in between as needed.

The checkpoint hits just after that happens.  The remaining 256MB gets 
dumped into the OS buffer cache.  This gets elevator sorted by the OS, 
which will now write it out to the card in sorted order, beginning to end. 
But writes to the controller will block because most of its cache is 
already filled, so they trickle in as earlier writes complete and cache 
space frees up.  Let's presume they're all deferred for now, because the 
drive is working toward the end of the disk and these new writes are 
closer to the beginning than the ones it's still working on.

Now the disk is near the end of its logical space, and there's a cache 
full of new dirty checkpoint data.  But the OS has finished spooling all 
its dirty stuff into the cache so the checkpoint is over.  During that 
checkpoint the disk has to seek enough to cover the full logical "length" 
of the volume.  The controller will continue merrily writing now until its 
cache clears again, moving from the end of the disk back to the beginning 
again.

Case 2:  Delayed writes, no background writer use

The checkpoint hits.  512MB of data gets dumped into the OS cache.  The 
OS sorts it and feeds it in sorted order into the controller's cache. 
The drive starts at the beginning and works its way through everything. 
By the time it's finished seeking its way across half the disk, the OS 
is unblocked because the remaining data fits in the controller's cache.

Can you see how in this second case, it may very well be that the 
checkpoint finishes *faster* because we waited longer to start writing? 
Because the OS has a much larger elevator sorting capacity than the disk 
controller, leaving data in RAM and waiting until there's more of it 
queued up there has approximately halved the number/size of seeks involved 
before the controller can say it's absorbed all the writes.
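
To put rough numbers on that claim, here's a toy model (a sketch only, 
nothing from the actual I/O stack: it treats seek cost as the distance 
between consecutive block addresses, and all the sizes are arbitrary):

    import random

    random.seed(1)
    # dirty data as 4096 random block addresses spread across the volume
    blocks = random.sample(range(1000000), 4096)

    def head_travel(batches):
        # Each batch is elevator sorted on its own; the head sweeps one
        # direction per batch and reverses for the next, matching the
        # case descriptions above.
        pos, travel, ascending = 0, 0, True
        for batch in batches:
            for b in sorted(batch, reverse=not ascending):
                travel += abs(b - pos)
                pos = b
            ascending = not ascending
        return travel

    half = len(blocks) // 2
    case1 = head_travel([blocks[:half], blocks[half:]])  # write early
    case2 = head_travel([blocks])                        # wait, one batch
    print("Case 1, two half-size sorted sweeps:", case1)
    print("Case 2, one full sorted sweep:      ", case2)

Case 2 comes out at roughly half the head travel of Case 1: one sweep 
across the disk instead of two.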

> This sounds a bit tenuous at best - almost to the point of being a 
> bug. Do you believe this is universal?

Of course not, or the background writer would be turned off by default. 
There are occasional reports where it just gets in the way, typically in 
setups where the controller has its own cache and there's a bad 
interaction there.
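
For reference, these are the knobs involved, with their names and 
defaults as of PostgreSQL 8.3 (check the documentation for your 
version).  Setting bgwriter_lru_maxpages to zero is how you turn the 
background writer off to test whether it's getting in the way:

    # postgresql.conf
    bgwriter_delay = 200ms          # sleep between background writer rounds
    bgwriter_lru_maxpages = 100     # max buffers written per round;
                                    # 0 disables the background writer
    bgwriter_lru_multiplier = 2.0   # how far ahead of buffer demand to clean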

This is not unique to this situation, so in that sense this class of 
problems is universal.  There are all kinds of operating system 
configurations that are tuned to delay writing in hopes of making those 
writes more efficient, because the OS usually has a much larger capacity 
for buffering pages to optimize what's going to happen than the downstream 
controller/disk caches do.  Once you've filled a downstream cache, you may 
not be able to influence how that device executes those requests anymore 
until that cache clears.
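
Linux makes a handy concrete example of that kind of tuning (an 
illustration of the general point, not something from this particular 
report): the VM writeback knobs control how much dirty data the OS 
accumulates before it starts pushing it downstream:

    # Linux dirty page writeback tuning (sysctl)
    vm.dirty_background_ratio = 10   # % of RAM dirty before background
                                     # writeback (pdflush) starts
    vm.dirty_ratio = 40              # % of RAM dirty before writing
                                     # processes block on the flush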

Note that the worst-case situation here actually gets worse in some 
respects as the downstream cache gets larger, because there's that much 
more data you have to wait to clear before you can influence what the 
disks are doing again, if it turns out you made a bad choice in what you 
asked it to write early.  If the disk head is far away from where you 
want to read or write next, and the cache it's draining is large, you 
can be in for quite a wait before it gets back your way.

--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance