List:       postgresql-admin
Subject:    Re: [ADMIN] Memory limits and PostgreSQL 8.4 - Over 63 GB of memory for DB?
From:       "Nicholson, Brad (Toronto, ON, CA)" <bnicholson () hp ! com>
Date:       2011-03-22 14:13:05
Message-ID: 2626AEE4839D064CB0472A3814DC403F46D211B5B7@GVW1092EXB.americas.hpqcorp.net

> -----Original Message-----
> From: Tapio Pitkäranta [mailto:Tapio.Pitkaranta@relex.fi]
> Sent: Monday, March 21, 2011 7:53 AM
> To: Nicholson, Brad (Toronto, ON, CA)
> Cc: pgsql-admin@postgresql.org; Devrim GÜNDÜZ
> Subject: RE: [ADMIN] Memory limits and PostgreSQL 8.4 - Over 63 GB of
> memory for DB?
> 
> Hello Brad,
> 
> Thank you for this information.
> 
> We have database tables that are around 50-100 GB each. While
> processing such a table, it seems to be crucial that it fits into
> memory (especially if the table is not on an SSD drive).
> 
> Until now we have thought the "shared_buffers" parameter should be
> larger than the biggest table that requires this kind of batch
> processing.
> 
> Do you think it does not matter what size we set the "shared_buffers"
> parameter to, as long as the server has enough memory, even if a
> single table is 50-100 GB?
> 
> Why are large shared buffers not recommended?

There is the potential to dirty a lot of buffers and have huge pauses at
checkpoint time when those get flushed to disk.  Remember, though, that unused
memory is going to go to your filesystem cache, and there is a very good chance
that a lot of the tables you are accessing are still going to be in memory.
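
If you want a rough idea of how much of that flushing is happening at
checkpoint time on your current setup, the pg_stat_bgwriter view (present in
8.3 and later) is a quick way to look.  Just a sketch:

    -- Compare how many buffers are written out at checkpoints versus by
    -- the background writer and by backends themselves.  A buffers_checkpoint
    -- figure that dwarfs the others suggests checkpoints are doing most of
    -- the flushing, which is where the pauses come from.
    SELECT checkpoints_timed,
           checkpoints_req,
           buffers_checkpoint,
           buffers_clean,
           buffers_backend
    FROM pg_stat_bgwriter;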

By all means, test it out.  It may be that it is fine with your workload,
particularly if it is read-only.  But be prepared for it not to work out as
expected.
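
If you do run such a test, one thing worth watching is the shared-buffer hit
ratio on those big tables, for example with something like the query below.
Keep in mind it only tells you about PostgreSQL's own buffers -- a block
counted in heap_blks_read may still have been served from the filesystem
cache rather than the disk.

    -- Per-table hit ratio; heap_blks_read are blocks not found in
    -- shared_buffers (they may still have come from the OS cache).
    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(heap_blks_hit::numeric
                 / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
    FROM pg_statio_user_tables
    ORDER BY heap_blks_read + heap_blks_hit DESC
    LIMIT 10;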

Brad.

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin

