
List:       pgsql-performance
Subject:    Re: [PERFORM] Setting autovacuum_vacuum_scale_factor to 0 a good idea ?
From:       Sébastien Lorion <sl@thestrangefactory.com>
Date:       2012-09-15 2:35:14
Message-ID: CAGa5y0OaBfJOepkt-oB+iV1SXnj_a+En3YVsYiDbx_E=9pX2iw@mail.gmail.com

Ah I see... I thought that by running the vacuum more often, its cost would
be divided in a more or less linear fashion, with a base constant cost.
While I read about the vacuum process, I did not check the source code or
even read about the actual algorithm, so I am sorry for having asked a
nonsensical question :)

It was a theoretical question; my current database does what you suggest. I
might increase the number of workers, though, since about 10 tables see a
heavy update rate and are quite large compared to the others.
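
(For reference, the worker and naptime knobs discussed in this thread are
postgresql.conf settings; a sketch with illustrative values, not
recommendations:)

```
# postgresql.conf -- illustrative values only
autovacuum_max_workers = 6    # default 3; change requires a server restart
autovacuum_naptime = 15s      # default 1min; how often the launcher wakes up
```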

Sébastien

On Fri, Sep 14, 2012 at 5:49 PM, Josh Berkus <josh@agliodbs.com> wrote:

>
> > I am pondering about this... My thinking is that since *_scale_factor
> need
> > to be set manually for largish tables (>1M), why not
> > set autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor,
> and
> > increase the value of autovacuum_vacuum_threshold to, say, 10000, and
> > autovacuum_analyze_threshold
> > to 2500 ? What do you think ?
>
> I really doubt you want to be vacuuming a large table every 10,000 rows.
>  Or analyzing every 2500 rows, for that matter.  These things aren't
> free, or we'd just do them constantly.
>
> Manipulating the analyze thresholds for a large table makes sense; on
> tables of over 10m rows, I often lower autovacuum_analyze_scale_factor
> to 0.02 or 0.01, to get them analyzed a bit more often.  But vacuuming
> them more often makes no sense.
>
> > Also, with systems handling 8k-10k tps and dedicated to a single
> database,
> > would there be any cons to decreasing autovacuum_naptime to say 15s, so
> > that the system perf is less spiky ?
>
> You might also want to consider more autovacuum workers.  Although if
> you've set the thresholds as above, that's the reason autovacuum is
> always busy and not keeping up ...
>
> --
> Josh Berkus
> PostgreSQL Experts Inc.
> http://pgexperts.com
>
>
> --
> Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-performance
>
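
Josh's point follows from autovacuum's triggering rule: a table is vacuumed
once dead tuples exceed autovacuum_vacuum_threshold +
autovacuum_vacuum_scale_factor * reltuples (and analyzed by the analogous
analyze formula). A sketch of the per-table tuning he describes, with a
hypothetical table name and the arithmetic spelled out:

```sql
-- Trigger rule: n_dead_tup > autovacuum_vacuum_threshold
--                           + autovacuum_vacuum_scale_factor * reltuples
-- For a 10M-row table at the default analyze scale factor of 0.1,
-- analyze fires after ~1,000,050 changed rows; at 0.01, after ~100,050.
ALTER TABLE big_table  -- hypothetical table name
  SET (autovacuum_analyze_scale_factor = 0.01);
```

Because the per-table storage parameter overrides the global setting, this
gets frequent analyzes on the handful of hot tables without lowering the
threshold cluster-wide.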
