
List:       ceph-users
Subject:    [ceph-users] Re: slow "rados ls"
From:       "Marcel Kuiper" <ceph () mknet ! nl>
Date:       2020-08-31 12:16:18
Message-ID: 4c713e2c96f023b95b1fbff420e71a07.squirrel () webmailcl2 ! ai-hosting ! nl

The compaction of the bluestore-kv's helped indeed. The response time is back
to acceptable levels.
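
For anyone who runs into the same problem: the effect of the compaction is
easy to check in the OMAP and META columns that Stefan mentions below (the
exact column layout depends on the Ceph release):

  ceph osd df         # per-OSD usage, including OMAP and META columns
  ceph osd df tree    # the same, grouped by the CRUSH tree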

Thanks for the help
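
For the archives: depending on the Ceph release, the compaction can also be
triggered online with "ceph tell", without stopping the OSDs (we used the
offline ceph-kvstore-tool procedure quoted below, so take this as an
untested alternative):

  # compact a single OSD online
  ceph tell osd.0 compact

  # or loop over all OSDs in the cluster
  for id in $(ceph osd ls); do ceph tell osd.$id compact; done

When using the offline procedure, remember to start the OSDs again after
the compaction has finished:

  systemctl start ceph-osd.target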

> Thank you Stefan, I'm going to give that a try
>
> Kind Regards
>
> Marcel Kuiper
>
>> On 2020-08-27 13:29, Marcel Kuiper wrote:
>>> Sorry that had to be Wido/Stefan
>>
>> What does "ceph osd df" give you? There is a column with "OMAP" and
>> "META". OMAP is ~ 13 B, META 26 GB in our setup. Quite a few files in
>> cephfs (main reason we have large OMAP).
>>
>>>
>>> Another question: how to use this ceph-kvstore-tool to compact the
>>> rocksdb? (I can't find a lot of examples)
>>
>> If you want to do a whole host at a time:
>>
>> systemctl stop ceph-osd.target
>>
>> Wait a few seconds until all OSD processes have exited.
>>
>> for osd in `ls /var/lib/ceph/osd/`; do (ceph-kvstore-tool bluestore-kv
>> /var/lib/ceph/osd/$osd compact &);done
>>
>> This works for us (no separate WAL/DB). Check the help of the
>> ceph-kvstore-tool if you have to do anything special with separate DB /
>> WAL devices.
>>
>> Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
