List: postgresql-general
Subject: Re: [HACKERS] Why we panic in pglz_decompress
From: Zdenek Kotala <Zdenek.Kotala () Sun ! COM>
Date: 2008-02-29 16:25:10
Message-ID: 47C831E6.5080209 () sun ! com
Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Zdenek Kotala wrote:
>>> I'm now looking into toast code and I found following code in
>>> pglz_decompress:
>>>
>>>     if (destsize != source->rawsize)
>>>         elog(destsize > source->rawsize ? FATAL : ERROR,
>>>              "compressed data is corrupt");
>>>
>>>
>>> I'm surprised -- why do we panic there?
>
>> Agreed, FATAL is too strong.
>
> Did either of you read the comment just before this code? The reason
> it's panicing is that it's possibly already tromped on some critical
> data structure inside the backend.
Yes, I did. But if you know how much memory is available for the
uncompressed data, you can check the boundary as you write. That is
better than overwriting data in memory. Yes, it slows the routine down a
little, but you will still be able to work with the table.
>>> My idea is to improve this piece of code and move error logging to
>>> callers (heap_tuple_untoast_attr() and heap_tuple_untoast_attr_slice())
>>> where we have a little bit more details (especially for external
>>> storage).
>
>> Why move it? Just adding errcontext in the callers should be enough.
>
> AFAIR this error has never once been reported from the field, so I don't
> see the point of investing a lot of effort in it.
Please increment that counter :-). I'm currently analyzing a core file
in which the backend finally fails in the elog function (called from
pglz_decompress), because memory had been overwritten -> no error
message in the log file. :(
Zdenek
---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings