
List:       pgsql-bugs
Subject:    Re: BUG #16159: recovery requests WALs for the next timelines before timeline switch LSN has been re
From:       Pavel Suderevsky <psuderevsky@gmail.com>
Date:       2020-01-28 16:13:32
Message-ID: CAEBTBzsi8HrTTSC1xwL0otzrwdefRSLuwdQ7LnAs+jQeN+EJ8A@mail.gmail.com

Hi,

Thank you very much for your explanation, and sorry for the delay in
answering.
But it still seems to me that PostgreSQL has enough information to determine
that no WALs exist for the new timeline, and so could skip searching for all
the possibly-existing WALs.

> 0000005300004AE1000000A3
> rmgr: Standby     len (rec/tot):     62/    62, tx:          0, lsn:
> 4AE1/A3000028, prev 4AE1/A28EC5F8, desc: RUNNING_XACTS nextXid 82249762
> latestCompletedXid 82249758 oldestRunningXid 82249759; 1 xacts: 82249759
> rmgr: XLOG        len (rec/tot):    106/   106, tx:          0, lsn:
> 4AE1/A3000068, prev 4AE1/A3000028, desc: CHECKPOINT_SHUTDOWN redo
> 4AE1/A3000068; tli 83; prev tli 83; fpw true; xid 0:82249762; oid 1074976;
> multi 144; offset 4568; oldest xid 562 in DB 1; oldest multi 1 in DB 1;
> oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
> rmgr: XLOG        len (rec/tot):     24/    24, tx:          0, lsn:
> 4AE1/A30000D8, prev 4AE1/A3000068, desc: SWITCH
> 0000005400004AE1000000A4
> rmgr: XLOG        len (rec/tot):    106/   106, tx:          0, lsn:
> 4AE1/A4000028, prev 4AE1/A30000D8, desc: CHECKPOINT_SHUTDOWN redo
> 4AE1/A4000028; tli 83; prev tli 83; fpw true; xid 0:82249762; oid 1074976;
> multi 144; offset 4568; oldest xid 562 in DB 1; oldest multi 1 in DB 1;
> oldest/newest commit timestamp xid: 0/0; oldest running xid 0; shutdown
> rmgr: XLOG        len (rec/tot):     42/    42, tx:          0, lsn:
> 4AE1/A4000098, prev 4AE1/A4000028, desc: END_OF_RECOVERY tli 84; prev tli
> 83; time 2020-01-28 06:29:03.432938 CST
> 00000054.history
> 83      4AE1/A4000098   no recovery target specified
>
It could just look through the first received WAL of the new timeline and
verify that the timeline switch occurred in that WAL. Finally, it could check
the archive for the only previous WAL that could possibly exist.
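
For illustration, a minimal sketch (assuming the default 16 MB
wal_segment_size; the values are taken from the quoted 00000054.history entry)
of how the history file alone already identifies the single segment of the new
timeline that can contain the switch record:

    # rough sketch: derive the first possible WAL segment of the new timeline
    # from the switch LSN recorded in 00000054.history (assumes 16 MB segments)
    new_tli = 0x54                              # timeline 84
    switch_lsn = (0x4AE1 << 32) | 0xA4000098    # 4AE1/A4000098
    seg_size = 16 * 1024 * 1024
    segs_per_xlogid = 0x100000000 // seg_size
    segno = switch_lsn // seg_size
    print("%08X%08X%08X" % (new_tli, segno // segs_per_xlogid,
                            segno % segs_per_xlogid))
    # prints 0000005400004AE1000000A4 -- the only TLI-84 segment worth checking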

Regarding the impact: the issue is not the large amount of WAL to apply, but
the searches for non-existent WALs on the remote storage; each such search can
take 5-10 seconds, while fetching an existing WAL takes milliseconds.
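
As a rough sketch of how that cost can be made visible (the command
fetch_wal_from_archive below is a placeholder, not a real tool), a
restore_command wrapper can log how long each archive lookup takes:

    #!/usr/bin/env python3
    # rough sketch: wrap the real archive fetch and log the time spent on each
    # lookup; "fetch_wal_from_archive" stands in for whatever actually
    # retrieves the requested segment from the remote storage
    import subprocess, sys, time

    wal_name, dest_path = sys.argv[1], sys.argv[2]
    start = time.monotonic()
    rc = subprocess.call(["fetch_wal_from_archive", wal_name, dest_path])
    print("restore of %s: exit=%d, %.1fs" % (wal_name, rc, time.monotonic() - start),
          file=sys.stderr)
    sys.exit(rc)  # non-zero tells PostgreSQL the segment is not in the archive

It would be wired in as, for example, restore_command = '/path/to/timed_restore.py %f %p'.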

Regards,
Pavel Suderevsky

Thu, 12 Dec 2019 at 06:49, Kyotaro Horiguchi <horikyota.ntt@gmail.com>:

> Hello.
>
> In short, it is not a bug.
>
> At Wed, 11 Dec 2019 12:39:20 +0000, PG Bug reporting form <
> noreply@postgresql.org> wrote in
> > The following bug has been logged on the website:
> >
> > Bug reference:      16159
> > Logged by:          Pavel Suderevsky
> > Email address:      psuderevsky@gmail.com
> > PostgreSQL version: 11.6
> > Operating system:   CentOS 7.6.1810 (3.10.0-957.el7.x86_64)
> > Description:
> >
> > Reproduced 11.2,11.6.
> >
> > If PostgreSQL starts recovery and finds a history file for a timeline that
> > is higher than the current one, it will request the file with the segment
> > for the future timeline (which most likely doesn't exist yet) and only then
> > request the file with the segment for the current timeline.
>
> The cause of the "future" timeline is that the standby has received
> the history file for TLI=22 but has not completed replaying the first
> checkpoint after promotion. In that case, WAL files before the
> timeline switch should not exist for TLI=22, and PostgreSQL makes
> sure of that by checking the archive for the file.
>
> Since the standby always starts archive recovery from the REDO location of
> the last checkpoint performed on the standby (or the restart point),
> the amount of WAL to read is unrelated to the promotion.
>
> > If the archive is located on remote storage, it can take a huge amount of
> > time to find that segments for the future timelines do not exist yet, and
> > therefore recovery can take too long.
>
> I don't think that peeking at non-existent remote files takes an amount of
> time comparable to a 16MB transfer. If the problem is the amount of WAL
> files to transfer during recovery, I came up with three ways to make
> standby startup faster.
>
> 1. For operational shutdowns/restarts, make sure that the latest
>  restart point is close enough to the replay location on the standby
>  before shutting down. If not, a manual checkpoint on the master followed
>  by one on the standby would help. The functions pg_control_checkpoint()
>  and pg_last_wal_replay_lsn() would work for checking that condition.
>
> 2. PostgreSQL 11 accepts "always" for the archive_mode GUC setting. It
>  enables standby-side archiving.
>
> https://www.postgresql.org/docs/11/runtime-config-wal.html#GUC-ARCHIVE-MODE
>
> 3. Decrease max_wal_size or checkpoint_timeout on the master, and/or
>  decrease checkpoint_timeout on the standby.  This decreases the
>  amount of time needed during recovery.
>
> regards.
>
> --
> Kyotaro Horiguchi
> NTT Open Source Software Center
>
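
For completeness, a rough sketch (run on the standby; assumes the psycopg2
driver and a connection allowed to call these functions) of the check
suggested in point 1 of the quoted reply:

    # rough sketch: compare the last restart point's REDO location with the
    # current replay position before an operational restart of the standby
    import psycopg2

    conn = psycopg2.connect("dbname=postgres")  # connection string is an assumption
    cur = conn.cursor()
    cur.execute("""
        SELECT (pg_control_checkpoint()).redo_lsn,
               pg_last_wal_replay_lsn(),
               pg_wal_lsn_diff(pg_last_wal_replay_lsn(),
                               (pg_control_checkpoint()).redo_lsn)
    """)
    redo, replay, gap = cur.fetchone()
    print("restartpoint redo=%s  replay=%s  gap=%s bytes" % (redo, replay, gap))
    # a large gap means a long replay after the next restart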
