
List:       slony1-general
Subject:    Re: [Slony1-general] Dropping table with lots of entries in sl_log
From:       Glyn Astill <glynastill@yahoo.co.uk>
Date:       2010-07-05 8:32:46
Message-ID: 725177.65026.qm@web23602.mail.ird.yahoo.com

> From: Christopher Browne <cbbrowne@ca.afilias.info>
> Subject: Re: [Slony1-general] Dropping table with lots of entries in sl_log
> To: "Glyn Astill" <glynastill@yahoo.co.uk>
> Cc: slony1-general@lists.slony.info
> Date: Friday, 2 July, 2010, 18:55
> Glyn Astill <glynastill@yahoo.co.uk>
> writes:
> > We have a replicated table that stores errors from an application that
> > recently went mad and logged about 650000 records, each about 75kb in size.
> > 
> > Our local replication is fine, but we have servers on the end of a VPN
> > that are now lagging massively.
> > 
> > If I drop the table from replication, will the entries in sl_log be
> > cleared out and replication cease for that table - or will the drop
> > wait until the tables are in sync?
> 
> SET DROP TABLE does not clear out the entries from sl_log_*, so you'd
> find that the DROP would wait until the tables get into sync :-(.
> 
> I suppose that what you could do, if you plan to nuke the table, is to
> delete those records from sl_log_* by hand.
> 
> That is...  If the table's ID is 17...
> 
> delete from _my_schema.sl_log_1 where log_tableid = 17;
> delete from _my_schema.sl_log_2 where log_tableid = 17;
> 
> I think I'd want to order it thus...
> 
> 0.  Back up the table someplace, so you have a copy containing the data
> that you think it ought to have.
> 
> e.g. - use "pg_dump -t my_log_table > my_log_table.sql" to save you
> from losing the contents.
> 
> 1.  Slonik script to drop table #17
> 
> 2.  delete from sl_log_* against all the nodes
> 
> 3.  Replication should then fairly quickly catch up.
> 
> 4.  You probably want to induce log rotations everywhere, because
> sl_log_* will be pretty bloated after all this.
> 
> select _my_schema.logswitch_start();
> 
> wait 10-ish minutes for the log switch to complete
> 
> Do it again...
> 
> select _my_schema.logswitch_start();
> 
> Be very very careful about step #2, that you're deleting the right
> data, as picking the wrong table could rather ruin your day!  I'd not
> call this "notably safe," but it's not heinously dangerous.

Thanks Chris,

In the end I just took the laissez-faire option, and everything caught up
nicely in a much shorter space of time than I thought it would.

Nice to know for the future though.
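
For the archives, I think steps 1 and 2 would come out something like the
script below. The cluster name "my_schema", table id 17, and the host/node
details are just placeholders for illustration - substitute your own:

    #!/bin/sh
    # Step 1: slonik script to drop table 17 from replication.
    # SET DROP TABLE takes the set origin node and the table id.
    slonik <<'EOF'
    cluster name = my_schema;
    node 1 admin conninfo = 'host=master dbname=mydb user=slony';
    set drop table (origin = 1, id = 17);
    EOF

    # Step 2: delete the now-orphaned rows from sl_log_* on every node.
    for host in master slave1 slave2; do
        psql -h "$host" -d mydb -U slony -c \
          "delete from _my_schema.sl_log_1 where log_tableid = 17;
           delete from _my_schema.sl_log_2 where log_tableid = 17;"
    done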

Glyn

_______________________________________________
Slony1-general mailing list
Slony1-general@lists.slony.info
http://lists.slony.info/mailman/listinfo/slony1-general


