List:       pgsql-bugs
Subject:    [BUGS] Segmentation Fault in Postgres server when using psqlODBC
From:       Joshua Berry <yoberi@gmail.com>
Date:       2013-05-24 16:55:19
Message-ID: CAPmZXM3DHRCBi4i3ZcKPOKqi1kd67t-1zHpS=72_1i=Z2PJL5Q@mail.gmail.com

Hi Groups,

I'm dealing with periodic backend process segmentation faults. I'm posting
to both the bugs and odbc lists, as it seems that my application's use of
psqlODBC triggers a bug in the postgres backend.

The environment is:
Clients: win32 clients using various versions of the psqlODBC driver (mostly
8.04.0200). The connection string contains "UseDeclareFetch=1", which leads
to long idle transactions and heavy cursor and savepoint use (sketched below).
Server: a Dell dual-Xeon x86 RHEL system with 48GB RAM (12GB PostgreSQL
shared memory) running PostgreSQL 9.2.4. Note that these same issues
occurred on PG 9.1 and PG 8.4.
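
For context, here is a hedged sketch of the kind of statement stream
UseDeclareFetch=1 produces; the cursor/savepoint names and the query are
illustrative, not taken from the actual logs:

  BEGIN;
  SAVEPOINT _EXEC_SVP_01;                  -- driver wraps statements in savepoints
  DECLARE SQL_CUR01 CURSOR WITH HOLD FOR
      SELECT * FROM some_table;            -- hypothetical query
  RELEASE SAVEPOINT _EXEC_SVP_01;
  -- the transaction is then left open while the application pulls
  -- rows from the cursor in small batches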

I've experienced these issues for over a year now. However, during that
time several things have changed which may or may not be related:
* The schema has been modified to make heavier use of triggers that update
manually maintained materialized views (i.e. not using 9.3's CREATE
MATERIALIZED VIEW).
* The number of concurrent users has increased (from perhaps 15 concurrent
users two years ago, to 30 concurrent users now).
* The PG version used has changed from 8.4, to 9.1, and finally to 9.2.
* I've done recent tuning to the planner cost constants in order to favor
index scans over table scans in more situations.

I've looked through past error logs and found that the server process
segfaulted about a dozen times over a 12-month period while on PG 8.4. Each
time one client's backend process crashes, the postmaster forcibly closes
all active connections (because shared memory may be corrupted) and then
restarts. This leaves all active clients stranded, as their connections are
closed and all cursor and savepoint state is lost. My app doesn't recover
gracefully, and users are forced to click through some cryptic error
messages from the application framework the app is built with (Clarion) and
then restart the app.

A few months ago I moved to another server running PG 9.1, assuming that the
issue on the previous 8.4 server was due to bad RAM, as the IPMI controller
on that system had logged a high ECC error count. However, I very quickly
had another segfault on the new system with PG 9.1. The default settings of
the OS and the PG init script disabled core dumps, so I have only been
collecting core files for the past few months.

I posted details to pgsql-general on April 10, 2013; here is a link to the
thread:
http://www.postgresql.org/message-id/CAPmZXM03MEDEn6nqqf_Phs3M1DK-EaXP5_K-LmirneOJMAQ-Hg@mail.gmail.com

The crash is always some variation of the following stack, observed in both
PG 9.1 and PG 9.2 crashes:
(gdb) bt
#0  ResourceOwnerEnlargeCatCacheRefs (owner=0x0) at resowner.c:603
#1  0x000000000070f372 in SearchCatCache (cache=0x27fab90, v1=<value
optimized out>, v2=<value optimized out>, v3=<value optimized out>,
v4=<value optimized out>) at catcache.c:1136
#2  0x000000000071b1ae in getTypeOutputInfo (type=20, typOutput=0x2b3db80,
typIsVarlena=0x2b3db88 "") at lsyscache.c:2482
#3  0x000000000045d127 in printtup_prepare_info (myState=0x2810290,
typeinfo=0x29ad7b0, numAttrs=42) at printtup.c:263
#4  0x000000000045d4c4 in printtup (slot=0x3469650, self=0x2810290) at
printtup.c:297
#5  0x000000000065a76a in RunFromStore (portal=0x285fed0, direction=<value
optimized out>, count=10, dest=0x2810290) at pquery.c:1122
#6  0x000000000065a852 in PortalRunSelect (portal=0x285fed0, forward=<value
optimized out>, count=10, dest=0x2810290) at pquery.c:940
#7  0x000000000065bcf8 in PortalRun (portal=0x285fed0, count=10,
isTopLevel=1 '\001', dest=0x2810290, altdest=0x2810290,
completionTag=0x7fffee67f7c0 "") at pquery.c:788
#8  0x0000000000659552 in exec_execute_message (argc=<value optimized out>,
argv=<value optimized out>, dbname=0x2768370 "DBNAME", username=<value
optimized out>) at postgres.c:1929
#9  PostgresMain (argc=<value optimized out>, argv=<value optimized out>,
dbname=0x2768370 "DBNAME", username=<value optimized out>) at
postgres.c:4016
#10 0x0000000000615161 in BackendRun () at postmaster.c:3614
#11 BackendStartup () at postmaster.c:3304
#12 ServerLoop () at postmaster.c:1367
#13 0x0000000000617dcc in PostmasterMain (argc=<value optimized out>,
argv=<value optimized out>) at postmaster.c:1127
#14 0x00000000005b6830 in main (argc=5, argv=0x2766480) at main.c:199
(gdb)
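
(Note that frame #0 shows owner=0x0: the query is apparently running with a
NULL resource owner, which fits the held-cursor analysis quoted below.)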

Andres Freund and Tom Lane spent some time tracking down possible root
causes; here are some excerpts from that discussion:

Tom Lane writes:
|Andres Freund <andres@2ndquadrant.com> writes:
|> Tom: It looks to me like printtup_prepare_info won't normally be called
|> in a held cursor. But if some concurrent DDL changed the number of
|> columns in typeinfo vs those in the receiver, that could explain the
|> issue and why it's not seen all the time, right?
|
|It looks to me like there are probably two triggering conditions:
|
|1. Client is doing a direct protocol Execute on a held-cursor portal.
|
|2. Cache flush occurs to drop the syscache entries needed by
|getTypeOutputInfo.  (Otherwise, they'd still be around from when the
|held cursor was created.)
|
|The first of these explains why we don't see it in the
|CLOBBER_CACHE_ALWAYS buildfarm runs, and the second one explains why
|Joshua is only seeing it intermittently and not every darn time his
|application does that.
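
(Side note: if a deterministic reproduction is ever needed, a server built
with -DCLOBBER_CACHE_ALWAYS should force condition 2 on every catalog
access, leaving only condition 1, the bare protocol Execute, to arrange.)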

Andres Freund writes:
|On 2013-04-10 19:06:12 -0400, Tom Lane wrote:
|> I wrote:
|> > (Wanders away wondering just how much the regression tests exercise
|> > holdable cursors.)
|>
|> And the answer is they're not testing this code path at all, because if
|> you do
|>       DECLARE c CURSOR WITH HOLD FOR ...
|>       FETCH ALL FROM c;
|> then the second query executes with a portal (and resource owner)
|> created to execute the FETCH command, not directly on the held portal.
|>
|> After a little bit of thought I'm not sure it's even possible to
|> reproduce this problem with libpq, because it doesn't expose any way to
|> issue a bare protocol Execute command against a pre-existing portal.
|> (I had thought psqlODBC went through libpq, but maybe it's playing some
|> games here.)
|
|Hm. PQexecPrepared() looks like it can do that.
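
To make the distinction concrete, here is a hedged sketch of the two paths
(the cursor name and query are hypothetical); the count=10 in the PortalRun
frame above is consistent with an Execute message carrying a row limit of 10:

  -- Path the regression tests exercise (FETCH runs in its own portal
  -- with its own resource owner; no crash):
  DECLARE c CURSOR WITH HOLD FOR SELECT * FROM some_table;
  FETCH 10 FROM c;

  -- Path psqlODBC appears to take: there is no SQL-level equivalent. The
  -- driver sends a wire-protocol Execute("c", 10) message directly against
  -- the held portal, so no separate portal or resource owner is set up.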

I didn't find that any bug was officially reported or documented, so I
wanted to follow up with more detail and follow the guidelines here:
http://www.postgresql.org/docs/current/static/bug-reporting.html
Specifically, I updated to the latest stable release, currently PG 9.2.4.

After updating to PG 9.2 three days ago, I've had two of the same segfaults.
Either I'm very (un)lucky or the intermittent issue is occurring more
frequently with PG 9.2.

On a test machine running a snapshot of the production database, with the
backend configured to log all queries, I captured an example of the traffic
a client app generates:
https://dl.dropboxusercontent.com/u/4715250/pg92_psqlODBC_08.04.0200.txt
For that capture, the logfile was cleared, the client app was started, a
window was opened and then closed, and the app was closed. I repeated the
exercise with psqlODBC version 09.01.0200, and aside from cursor names and
timestamps differing, all logged lines were identical.

I do not currently have a testing environment that emulates production
closely enough to trigger this bug. I've never had this crash outside of the
production environment.

Any thoughts/suggestions?

Regards,
-Joshua
