List:       postgresql-general
Subject:    Re: [GENERAL] disk backups
From:       Tom Lane <tgl () sss ! pgh ! pa ! us>
Date:       2000-06-30 16:21:10

Martijn van Oosterhout <kleptog@cupid.suninternet.com> writes:
> Tom Lane wrote:
>> pg_dump shouldn't be a performance hog if you are using the default
>> COPY-based style of data export.  I'd only expect memory problems
>> if you are using INSERT-based export (-d or -D switch to pg_dump).

> Aha! Thanks for that! Last time I asked here nobody answered...
> So it only happens with an INSERT based export, didn't know
> that (though I can't see why there would be a difference...)

COPY uses a streaming style of output.  To generate INSERT commands,
pg_dump first does a "SELECT * FROM table", and that runs into libpq's
suck-the-whole-result-set-into-memory behavior.  See nearby thread
titled "Large Tables(>1 Gb)".
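
For reference, the two export styles discussed above correspond to these pg_dump invocations (database name is made up; `-d`/`-D` are the switches of that era, later renamed `--inserts`/`--column-inserts`):

```shell
# Default COPY-based export: rows are streamed, memory use stays flat.
pg_dump mydb > mydb.sql

# INSERT-based export (-d, or -D to include column names): pg_dump runs
# "SELECT * FROM table", so libpq buffers the whole result set in RAM.
pg_dump -d mydb > mydb_inserts.sql
```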

> Yes, we are using -D, mainly because we've had "issues" with
> the COPY based export, ie, it won't read the resulting file
> back. Admittedly this was a while ago now and I haven't checked
> since.

IIRC that's a long-since-fixed bug.  If not, file a bug report so
we can fix whatever's still wrong...

> I was thinking to write my own version of pg_dump that would
> do that but also allow specifying an ordering constraint, ie,
> clustering. Maybe it would be better to just switch to the 
> other output format...
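
One possible workaround, sketched here with made-up table and index names: physically reorder the table on an index with CLUSTER (using the old `CLUSTER index ON table` syntax) before dumping, so the COPY-based export comes out in that order:

```shell
# Reorder the table's heap to match the index, then dump as usual;
# names are hypothetical.
psql mydb -c 'CLUSTER mytable_key_idx ON mytable'
pg_dump mydb > mydb.sql
```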

Philip Warner needs alpha testers for his new version of pg_dump ;-).
Unfortunately I think he's only been talking about it on pghackers
so far.

			regards, tom lane
