
List:       pgsql-performance
Subject:    Re: [PERFORM] : Tracking Full Table Scans
From:       Venkat Balaji <venkat.balaji@verse.in>
Date:       2011-09-28 6:25:25
Message-ID: CAFrxt0jKLh3mHgJJ54D=Tdy=4ywTV4nbuEmt6Z0R9ZQhAJO5Ug@mail.gmail.com

Yes, I am looking for the justified full table scans.

If bigger tables are getting scanned, I would like to know the percentage of
rows scanned against the percentage of rows returned as output.

If a query needs, say, 80% of a table's rows as output, then a full table
scan is generally better than an index scan.

I believe there is a way to get this in Postgres. I think we can derive it
from pg_stat_user_tables, pg_statio_user_tables and pg_stats.

I will post the calculation once I get it.
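
As a starting point, something along these lines might work (a rough sketch
only, not a tested calculation -- the column names are from
pg_stat_user_tables, but note that seq_scan and seq_tup_read are cumulative
counters, so seq_tup_read / seq_scan gives only the *average* rows touched
per sequential scan, compared here against the current live-row estimate):

```sql
-- Sketch: average tuples read per sequential scan, as a rough
-- percentage of each table's live rows. High values suggest the
-- full scans were "justified"; low values suggest low selectivity.
SELECT relname,
       seq_scan,
       seq_tup_read,
       n_live_tup,
       CASE WHEN seq_scan > 0
            THEN round(100.0 * (seq_tup_read / seq_scan)
                       / NULLIF(n_live_tup, 0), 2)
       END AS avg_pct_rows_per_seq_scan
FROM pg_stat_user_tables
WHERE seq_scan > 0
ORDER BY seq_tup_read DESC;
```

The NULLIF guards against division by zero on empty tables, and the counters
can be reset with pg_stat_reset() to measure a specific workload window.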

Thanks
VB

On Wed, Sep 28, 2011 at 6:25 AM, Craig Ringer <ringerc@ringerc.id.au> wrote:

> On 09/28/2011 12:26 AM, Venkat Balaji wrote:
>
>> Thanks a lot Kevin !!
>>
>> Yes. I intended to track full table scans first to ensure that only
>> small tables, or tables with very few pages, are (as you said) being
>> scanned in full.
>>
>
> It can also be best to do a full table scan of a big table for some
> queries. If the query needs to touch all the data in a table - for example,
> for an aggregate - then the query will often complete fastest and with less
> disk use by using a sequential scan.
>
> I guess what you'd really want is to find out about queries that do
> seqscans matching relatively small fractions of the total tuples scanned,
> i.e. low-selectivity seqscans. I'm not sure whether it's possible to
> gather this data with PostgreSQL's current level of stats detail.
>
> --
> Craig Ringer
>
