List:       postgresql-general
Subject:    Re: PostgreSQL 11 Auto vacuum
From:       Michael Lewis <mlewis () entrata ! com>
Date:       2019-06-28 19:37:01
Message-ID: CAHOFxGrFzT+gShBzFmoyg-tmEj8ps8m4SO=TPo0h0sufPJqznA () mail ! gmail ! com
>
> Actually we have noticed that autovacuum in PG10 keeps vacuuming the
> master tables, which takes a lot of time, and doesn't get to the child
> tables to remove the dead tuples.
>

What do the logs say actually got done during these long-running
autovacuums? Is it feasible to increase the work allowed before autovacuum
stops (autovacuum_vacuum_cost_limit), or perhaps increase the number of
workers? What is the balance of updates versus deletes in the workload?
That is, would it make sense to decrease the fillfactor on these tables so
that you get more HOT (heap-only tuple) updates and less index bloat, and
gain performance that way? How often are you manually vacuuming?
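To make the suggestions above concrete, a sketch of the kind of per-table tuning I have in mind (the table name "orders" is just a placeholder; the numbers are starting points to adjust against your workload, not recommendations):

```sql
-- Let autovacuum do more work per cycle and trigger sooner on a hot table:
ALTER TABLE orders SET (
    autovacuum_vacuum_cost_limit = 2000,   -- raise the per-table cost budget
    autovacuum_vacuum_scale_factor = 0.05  -- vacuum at ~5% dead rows (default 0.2)
);

-- Leave free space on each page so updates can stay HOT (heap-only tuple):
ALTER TABLE orders SET (fillfactor = 80);

-- Check what autovacuum has actually been doing and where dead tuples pile up:
SELECT relname, last_autovacuum, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

More workers would be autovacuum_max_workers in postgresql.conf (needs a restart); note the cost limit is shared across workers unless set per table as above.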
