List: pgsql-performance
Subject: Fwd: Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)
From: Square Bob <square_bob () yahoo ! com>
Date: 2018-12-08 19:03:27
Message-ID: b76d913c-010b-39b3-3342-5db083536415 () yahoo ! com
This question is probably a better fit for the performance list; sorry for the cross-post.
-------- Forwarded Message --------
Subject: Amazon Aurora config - seriously overcommitted defaults? (May be Off Topic)
Date: Sat, 8 Dec 2018 12:00:33 -0700
From: Square Bob <square_bob@yahoo.com>
To: pgsql-general@lists.postgresql.org
All;
My apologies if this is off topic.
Our company is moving to Aurora. In the past I would take care not to let PostgreSQL over-commit memory beyond the actual memory on the server, which meant I would add the buffer pool (shared_buffers) + (work_mem * max_connections) + (maintenance_work_mem * autovacuum workers).
However, as I look at the Aurora defaults they are all off the charts. For example, based on the calculations in the config (Amazon doesn't make it easy: some settings are in pages, some are in kB, and some are in who knows what), I see the following defaults in our Aurora config:
The instance size is db.r4.xlarge, which is listed as having 30.5GB of RAM.
Here are the default settings:
shared_buffers: {DBInstanceClassMemory/10922}
which equates to 24GB
work_mem: 64000 (kb)
which equates to 65.5MB
maintenance_work_mem: GREATEST({DBInstanceClassMemory/63963136*1024},65536)
which equates to 4.2GB if the value is in 8kB pages (or 512MB if it's in kB, maintenance_work_mem's usual unit)
max_connections: LEAST({DBInstanceClassMemory/9531392},5000)
which equates to 3,380
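For anyone who wants to check the unit conversions, here's a rough sketch in Python. It assumes DBInstanceClassMemory is the nominal 30.5 GiB of a db.r4.xlarge expressed in bytes, and reads each value in PostgreSQL's usual GUC units (8 kB pages for shared_buffers, kB for work_mem and maintenance_work_mem); the exact byte value Amazon substitutes isn't documented, so small discrepancies with the console's numbers are expected:

```python
# Evaluate Aurora's parameter formulas. ASSUMPTION: DBInstanceClassMemory
# is the nominal 30.5 GiB of a db.r4.xlarge, in bytes; the exact value
# Amazon substitutes is not documented, so treat these as approximations.
mem = int(30.5 * 1024**3)                      # 32,749,125,632 bytes

shared_buffers = (mem // 10922) * 8192         # value is in 8 kB pages -> bytes
work_mem = 64_000 * 1024                       # 64000 kB -> bytes
maintenance_work_mem = max(mem // 63_963_136 * 1024, 65_536) * 1024  # kB -> bytes
max_connections = min(mem // 9_531_392, 5000)

print(f"shared_buffers       ~ {shared_buffers / 1e9:.1f} GB")       # ~ 24.6 GB
print(f"work_mem             ~ {work_mem / 1e6:.1f} MB")             # ~ 65.5 MB
print(f"maintenance_work_mem ~ {maintenance_work_mem / 1e6:.0f} MB") # ~ 537 MB
print(f"max_connections      = {max_connections}")                   # 3435
```

With those assumptions I get max_connections = 3435 rather than the 3380 the console reports, which suggests Amazon plugs in a slightly smaller memory figure than the advertised 30.5GB.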
According to my math (if I got it right), in a worst-case scenario, if we maxed out the max_connections, work_mem and maintenance_work_mem limits, the DB would request 247GB of memory.
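Here's the arithmetic behind that total, as a sketch: it reads the formula results in kB (PostgreSQL's usual GUC unit, which puts maintenance_work_mem at ~512MB), uses the 3,380 connections the console reports, and assumes the default autovacuum_max_workers of 3:

```python
# Worst case if every backend and autovacuum worker used its full
# allowance at once. ASSUMES autovacuum_max_workers = 3 (the PostgreSQL
# default) and reads the formula results in kB, Postgres's usual GUC unit.
GB = 10**9
shared_buffers = 24_563_343_360        # ~24.6 GB, from the formula above
work_mem = 64_000 * 1024               # 64000 kB -> ~65.5 MB per sort/hash
maintenance_work_mem = 524_288 * 1024  # 524288 kB -> 512 MiB
max_connections = 3380                 # as reported by the console
autovacuum_max_workers = 3

worst_case = (shared_buffers
              + work_mem * max_connections
              + maintenance_work_mem * autovacuum_max_workers)
print(f"worst case ~ {worst_case / GB:.0f} GB on 30.5 GB of RAM")  # ~ 248 GB
```

And that's conservative, since a single query can allocate work_mem several times over (once per sort or hash node in the plan).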
Additionally, Amazon has set effective_cache_size =
{DBInstanceClassMemory/10922}
which equates to about 2.9MB (which, given the other outlandish settings, may be the only appropriate value in the system)
What the hell is Amazon doing here? Am I missing the boat on tuning PostgreSQL memory? Is Amazon simply betting that users will never fully utilize an instance?
Thanks in advance