
List:       ceph-users
Subject:    Re: [ceph-users] Large OMAP Objects in zone.rgw.log pool
From:       Brett Chancellor <bchancellor@salesforce.com>
Date:       2019-07-31 18:47:22
Message-ID: CADuVtVCST7zJHModmOeoQ1q7Bex83YHBsuJFAHoZd5C9bLWM3A@mail.gmail.com


I was able to answer my own question. For future interested parties, I
initiated a deep scrub on the placement group, which cleared the error.
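
For anyone who hits this later, a minimal sketch of that step (the PG id
51.2a is a placeholder for whichever PG holds the reported meta.log object;
one way to find it is shown further down in the thread):

$ ceph pg deep-scrub 51.2a
$ ceph health detail    # the warning clears once the deep scrub completes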

On Tue, Jul 30, 2019 at 1:48 PM Brett Chancellor <bchancellor@salesforce.com>
wrote:

> I was able to remove the meta objects, but the cluster is still in WARN
> state:
> HEALTH_WARN 1 large omap objects
> LARGE_OMAP_OBJECTS 1 large omap objects
>     1 large objects found in pool 'us-prd-1.rgw.log'
>     Search the cluster log for 'Large omap object found' for more details.
>
> How do I go about clearing it out? I don't see any other references to
> large omap in any of the logs. I've tried restarting the mgrs, the
> monitors, and even the OSD that reported the issue.
>
> -Brett
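>
> (A minimal sketch of one way to find the PG to scrub, assuming the object
> name reported earlier in the thread: the warning is written to the cluster
> log on the mon hosts, and 'ceph osd map' maps an object to its PG.)
>
> $ grep 'Large omap object found' /var/log/ceph/ceph.log
> $ ceph osd map us-prd-1.rgw.log meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19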
>
> On Thu, Jul 25, 2019 at 2:55 PM Brett Chancellor <
> bchancellor@salesforce.com> wrote:
>
>> 14.2.1
>> Thanks, I'll try that.
>>
>> On Thu, Jul 25, 2019 at 2:54 PM Casey Bodley <cbodley@redhat.com> wrote:
>>
>>> What ceph version is this cluster running? Luminous or later should not
>>> be writing any new meta.log entries when it detects a single-zone
>>> configuration.
>>>
>>> I'd recommend editing your zonegroup configuration (via 'radosgw-admin
>>> zonegroup get' and 'put') to set both log_meta and log_data to false,
>>> then commit the change with 'radosgw-admin period update --commit'.
>>>
>>> You can then delete any meta.log.* and data_log.* objects from your log
>>> pool using the rados tool.
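>>>
>>> A minimal sketch of that sequence, assuming the pool name from this
>>> thread (the JSON edit is done by hand, flipping the zone's log_meta and
>>> log_data fields to "false"):
>>>
>>> $ radosgw-admin zonegroup get > zonegroup.json
>>> $ vi zonegroup.json          # set "log_meta": "false", "log_data": "false"
>>> $ radosgw-admin zonegroup set < zonegroup.json
>>> $ radosgw-admin period update --commit
>>> $ rados -p us-prd-1.rgw.log ls | grep -E '^(meta\.log|data_log)' \
>>>     | xargs -n1 rados -p us-prd-1.rgw.log rm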
>>>
>>> On 7/25/19 2:30 PM, Brett Chancellor wrote:
>>> > Casey,
>>> >   These clusters were set up with the intention of one day doing
>>> > multisite replication. That has never happened. The cluster has a single
>>> > realm, which contains a single zonegroup, and that zonegroup contains
>>> > a single zone.
>>> >
>>> > -Brett
>>> >
>>> > On Thu, Jul 25, 2019 at 2:16 PM Casey Bodley <cbodley@redhat.com
>>> > <mailto:cbodley@redhat.com>> wrote:
>>> >
>>> >     Hi Brett,
>>> >
>>> >     These meta.log objects store the replication logs for metadata
>>> >     sync in multisite. Log entries are trimmed automatically once all
>>> >     other zones have processed them. Can you verify that all zones in
>>> >     the multisite configuration are reachable and syncing? Does
>>> >     'radosgw-admin sync status' on any zone show that it's stuck behind
>>> >     on metadata sync? That would prevent these logs from being trimmed
>>> >     and result in these large omap warnings.
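>>> >
>>> >     A minimal check, run from a gateway host in each zone:
>>> >
>>> >     $ radosgw-admin sync status
>>> >
>>> >     If any zone reports metadata sync behind, or a peer zone as
>>> >     unreachable, these log entries keep accumulating instead of being
>>> >     trimmed.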
>>> >
>>> >     On 7/25/19 1:59 PM, Brett Chancellor wrote:
>>> >     > I'm having an issue similar to
>>> >     > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-March/033611.html .
>>> >     > I don't see where any solution was proposed.
>>> >     >
>>> >     > $ ceph health detail
>>> >     > HEALTH_WARN 1 large omap objects
>>> >     > LARGE_OMAP_OBJECTS 1 large omap objects
>>> >     >     1 large objects found in pool 'us-prd-1.rgw.log'
>>> >     >     Search the cluster log for 'Large omap object found' for more details.
>>> >     >
>>> >     > $ grep "Large omap object" /var/log/ceph/ceph.log
>>> >     > 2019-07-25 14:58:21.758321 osd.3 (osd.3) 15 : cluster [WRN] Large omap
>>> >     > object found. Object: 51:61eb35fe:::meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19:head
>>> >     > Key count: 3382154 Size (bytes): 611384043
>>> >     >
>>> >     > $ rados -p us-prd-1.rgw.log listomapkeys meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19 | wc -l
>>> >     > 3382154
>>> >     >
>>> >     > $ rados -p us-prd-1.rgw.log listomapvals meta.log.e557cf47-46df-4b45-988e-9a94c5004a2e.19
>>> >     > This returns entries from almost every bucket, across multiple
>>> >     > tenants. Several of the entries are from buckets that no longer
>>> >     exist
>>> >     > on the system.
>>> >     >
>>> >     > $ ceph df |egrep 'OBJECTS|.rgw.log'
>>> >     >     POOL              ID   STORED    OBJECTS   USED      %USED   MAX AVAIL
>>> >     >     us-prd-1.rgw.log  51   758 MiB   228       758 MiB   0       102 TiB
>>> >     >
>>> >     > Thanks,
>>> >     >
>>> >     > -Brett
>>> >     >
>>> >     > _______________________________________________
>>> >     > ceph-users mailing list
>>> >     > ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
>>> >     > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >     _______________________________________________
>>> >     ceph-users mailing list
>>> >     ceph-users@lists.ceph.com <mailto:ceph-users@lists.ceph.com>
>>> >     http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >
>>>
>>


_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

