List:       pgsql-performance
Subject:    Re: [PERFORM] Very poor performance loading 100M of sql data using
From:       Greg Smith <gsmith () gregsmith ! com>
Date:       2008-04-29 15:58:00
Message-ID: Pine.GSO.4.64.0804291149450.8414 () westnet ! com

On Tue, 29 Apr 2008, John Rouillard wrote:

> So swap the memory usage from the OS cache to the postgresql process.
> Using 1/4 as a guideline it sounds like 600,000 (approx 4GB) is a
> better setting. So I'll try 300000 to start (1/8 of memory) and see
> what it does to the other processes on the box.

That is potentially a good setting.  Just be warned that with a high 
setting here, when you do hit a checkpoint you can end up with a lot of 
data in memory that needs to be written out, and under 8.2 that can cause 
an ugly spike in disk writes.  The reason I usually throw out 30,000 as a 
suggested starting figure is that most caching disk controllers can buffer 
at least 256MB of writes, which keeps that situation from getting too bad. 
Try it out and see what happens; just be warned that checkpoint spikes are 
the possible downside of setting shared_buffers too high, so you might 
want to ease into the increase more gradually (particularly if this system 
is shared with other apps).
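
As a quick sanity check on the arithmetic above (shared_buffers is counted 
in pages, 8kB each on a default build), here's a small sketch -- the helper 
function name is just illustrative, not anything in PostgreSQL itself:

```python
# Hypothetical helper: convert a shared_buffers page count into GiB,
# assuming the default 8kB PostgreSQL page size.
def shared_buffers_gib(pages, page_size_kb=8):
    return pages * page_size_kb / (1024 * 1024)

# The figures discussed in this thread:
print(round(shared_buffers_gib(600000), 2))  # ~4.58 GiB, the "approx 4GB" setting
print(round(shared_buffers_gib(300000), 2))  # ~2.29 GiB, the more cautious start
print(round(shared_buffers_gib(30000), 2))   # ~0.23 GiB, the conservative figure
```

So 30,000 pages is roughly 234MB, which is why a controller with 256MB of 
write cache can absorb a checkpoint at that setting.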
--
* Greg Smith gsmith@gregsmith.com http://www.gregsmith.com Baltimore, MD

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
