List:       postgresql-admin
Subject:    Re: [ADMIN] poor performance of loading data
From:       Chris Ruprecht <chrup999 () yahoo ! com>
Date:       2001-12-20 0:55:06
Anna,

At 12:57 -0500 12/19/2001, Zhang, Anna wrote:
>I just installed Postgres 7.1.3 on my Red Hat 7.2 Linux box. We are doing
>research to see how Postgres performs. I used the COPY utility to import data
>from a text file which contains 32 million rows; 26 hours have passed and it is
>still running. My question is: how does Postgres handle such data loading? Does
>it commit every row, or is the commit point adjustable? How? Does Postgres
>provide a direct load to disk files like Oracle? Are there other ways to speed
>it up? If loading performance can't be improved significantly, we have to go
>back to Oracle. Can anybody help? Thanks!

Did you edit the PostgreSQL startup parameters? If not, you have a 
very sluggish system. You should give the server an ample supply of 
shared memory (shared_buffers of roughly 2000 - 8000 blocks of 8 KB 
each). Also, do you have an index defined on your table? If so, drop 
the index, load the records, then re-create the index(es) on the 
table. Speed will improve tremendously for COPY commands with no 
index to update: building an index once over all 32 million records 
is much faster than inserting 32 million index entries one at a time.
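
As a sketch of both suggestions (table, index, and file names below are 
made up for illustration, and you would adjust shared_buffers in 
postgresql.conf to taste, e.g. shared_buffers = 4000 for about 32 MB):

    -- Hypothetical names, for illustration only.
    DROP INDEX my_table_id_idx;

    -- COPY loads the whole file in one transaction, so rows are
    -- not committed individually.
    COPY my_table FROM '/tmp/data.txt';

    -- Rebuild the index once over the full table.
    CREATE INDEX my_table_id_idx ON my_table (id);

    -- Refresh planner statistics after a large load.
    VACUUM ANALYZE my_table;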

You might find your file loads in a fraction of the time it has used 
already with these optimizations applied. Without knowing more 
specifics, this is all I can recommend.

Best regards,
Chris
-- 
Chris Ruprecht
Network grunt and bit pusher extraordinaire


---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org