
List:       postgresql-general
Subject:    Re: typical active table count?
From:       Ben Chobot <bench () silentmedia ! com>
Date:       2023-06-28 14:24:12
Message-ID: a40cdd17-5875-5954-6fb7-adfa4a879ed9 () silentmedia ! com

Jeremy Schneider wrote on 6/27/23 11:47 AM:
> Thanks Ben, it's not a concern, but I'm trying to better understand how common
> this might be. And I think sharing general statistics about how people
> use PostgreSQL is a great help to the developers who build and maintain it.
>
> One really nice thing about PostgreSQL is that two quick copies of
> pg_stat_all_tables and you can easily see this sort of info.
>
> If you have a database where more than 100 tables are updated within a
> 10 second period - this seems really uncommon to me - I'm very curious
> about the workload.

Well, in our case we have a SaaS model where a moderately complicated 
schema is replicated hundreds of times per db. It doesn't take much load 
to end up scattering writes across many tables (not to mention their 
indices). We do have table partitioning too, but it's a relatively small 
part of our schema and the partitioning is done by date, so we really 
only have one hot partition at a time. FWIW, most of our dbs have 32 cores.

All that aside, as others have said there are many reasonable ways to 
reach the threshold you have set.
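
For anyone wanting to try the snapshot approach Jeremy mentions, a minimal 
sketch might look like the following (the view and column names are from 
pg_stat_all_tables; the 10-second interval matches the threshold in the 
quoted message, and "snap" is just an illustrative temp table name):

    -- First copy: snapshot current per-table write counters
    CREATE TEMP TABLE snap AS
      SELECT relid, n_tup_ins + n_tup_upd + n_tup_del AS writes
      FROM pg_stat_all_tables;

    -- Wait out the measurement window
    SELECT pg_sleep(10);

    -- Second copy: count tables whose counters moved in the interval
    SELECT count(*) AS tables_written
    FROM pg_stat_all_tables t
    JOIN snap s USING (relid)
    WHERE t.n_tup_ins + t.n_tup_upd + t.n_tup_del > s.writes;

Note that the statistics views are updated asynchronously, so the result is 
approximate, but it's plenty for this kind of rough workload survey.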




