
List:       postgresql-admin
Subject:    Re: Upgrade from PG12 to PG 15
From:       Jef Mortelle <jefmortelle@gmail.com>
Date:       2023-07-24 12:59:29
Message-ID: f22e1380-12eb-10ed-5b3d-cd6b0b6a6324@gmail.com

Correction: -k = --link (not -r, as written in my message below).
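
For reference, the same invocation spelled out with the long option
names (same paths and ports as below; per the pg_upgrade documentation,
-r is --retain, i.e. keep SQL and log files after success):

/usr/lib/postgresql15/bin/pg_upgrade --retain --verbose \
    --old-port=5431 --new-port=5432 --link --jobs=8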


On 24/07/2023 14:59, Jef Mortelle wrote:
> Hello
>
> This is the syntax:
>
> export PGDATA=/pg/PG15/system
> export PATH=/usr/lib/postgresql15/bin:/bin:/usr/bin:/usr/local/bin
>
> export PGDATAOLD=/pg/data
> export PGDATANEW=/pg/PG15/system
> export PGBINOLD=/usr/lib/postgresql12/bin
> export PGBINNEW=/usr/lib/postgresql15/bin
>
> /usr/lib/postgresql15/bin/pg_upgrade -r -v -p 5431 -P 5432 -k -j 8
>
>  -r = --link
>
> Kind regards
>
> On 24/07/2023 14:52, Scott Ribe wrote:
>> On Jul 24, 2023, at 12:38 AM, Jef Mortelle <jefmortelle@gmail.com> 
>> wrote:
>>> For some reason Postgres creates a new subdirectory for each PG 
>>> version (I use tablespaces for each database in my PG cluster), 
>>> even when using the link option.
>>> So after a few upgrades, it ends up in a real mess of directories?
>> At the end of pg_upgrade, you can start up the old version against 
>> the old directory, or the new version against the new directory. 
>> (With --link, that only holds until something writes into the db; 
>> after that you are committed to whichever version you started.) Once 
>> you are comfortable that everything is good with the new version, 
>> you should delete the old data. Alternatively, if there is a problem 
>> forcing you back to the old version, you delete the new data.
>>
>>> => pg_dump --schema-only, after a RAM upgrade from 8GB to 64GB 
>>> (otherwise the query against pg_largeobject ends in an out-of-memory 
>>> error), runs in about 3-4 minutes
>>> => pg_restore takes 7 hours, of which 99% is spent executing 
>>> queries like:  SELECT pg_catalog.lo_unlink('oid');
>> Given the tests you've run, it seems to me that it is doing something 
>> it ought not to when using --link.
>>
>>> The database is 95GB, so not so big ;-) but it has ~25 million large 
>>> objects in it.
>> I suppose the use of large objects here is an artifact of support for 
>> other databases which have much lower limits on varchar column length.
>>
>>
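
To make that advice concrete, a minimal post-upgrade sequence with the
paths from my first message would be something like the following
(delete_old_cluster.sh is the cleanup script pg_upgrade writes on
success; as far as I can see it also covers the old per-version
PG_12_* subdirectories that each tablespace keeps alongside the new
PG_15_* ones):

# start the new cluster and sanity-check it before committing
/usr/lib/postgresql15/bin/pg_ctl -D /pg/PG15/system start
/usr/lib/postgresql15/bin/vacuumdb --all --analyze-in-stages

# only once satisfied: with --link the clusters share files, so this
# is the point of no return for the old version
./delete_old_cluster.sh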

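About the large objects: the number of entries behind those lo_unlink()
calls can be checked up front. pg_largeobject_metadata has one row per
large object, so counting it avoids scanning the much larger
pg_largeobject itself (the database name here is just a placeholder):

psql -p 5431 -d mydb -c \
    "SELECT count(*) FROM pg_catalog.pg_largeobject_metadata;"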
