List:       ceph-users
Subject:    [ceph-users] Re: OSD Memory usage
From:       Seena Fallah <seenafallah@gmail.com>
Date:       2020-11-27 0:47:43
Message-ID: <CAK3+OmXxYS8R_mjW43gN0+f7LPM7fUXc2+HwAn2=YvrJ8KaUTQ@mail.gmail.com>

This is what happens in my cluster (screenshots attached). At 10:11 I
turned on bluefs_buffered_io on all my OSDs: latency recovered, but
throughput decreased.
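
For reference, this is roughly how the flag can be toggled cluster-wide
(a sketch: on some Ceph versions bluefs_buffered_io is only read at OSD
startup, so an OSD restart may be needed for it to take effect):

    # persist the setting for all OSDs in the monitor config store
    ceph config set osd bluefs_buffered_io true
    # or inject it into the running OSDs
    ceph tell osd.* injectargs '--bluefs_buffered_io=true'
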
I had these configs set on all OSDs during recovery (applied roughly as
sketched below):
osd-max-backfills 1
osd-recovery-max-active 1
osd-recovery-op-priority 1
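
A minimal sketch of applying them at runtime (injectargs also accepts
the underscore form of the option names):

    ceph tell osd.* injectargs '--osd-max-backfills 1 \
      --osd-recovery-max-active 1 --osd-recovery-op-priority 1'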

Do you have any idea why latency is affected so much even with these
conservative recovery parameters?

On Tue, Nov 24, 2020 at 12:42 PM Seena Fallah <seenafallah@gmail.com> wrote:

> I added one OSD node to the cluster and got 500 MB/s throughput on my
> disks, 2 or 3 times better than before, but my latency rose 5 times!
> When I enable bluefs_buffered_io, disk throughput drops to 200 MB/s and
> my latency goes down!
> Is there any kernel config/tuning that would give correct latency
> without bluefs_buffered_io?
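>
> For context, the only kernel knob I can think of here (an assumption on
> my part, not a confirmed fix) is the block-device readahead, which only
> affects buffered reads, e.g.:
>
>     cat /sys/block/sdX/queue/read_ahead_kb    # sdX is a placeholder
>     echo 128 > /sys/block/sdX/queue/read_ahead_kb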
>
> On Mon, Nov 23, 2020 at 3:52 PM Igor Fedotov <ifedotov@suse.de> wrote:
>
>> Hi Seena,
>>
>> just to note - this ticket might be relevant:
>>
>> https://tracker.ceph.com/issues/48276
>>
>>
>> Mind leaving a comment there?
>>
>>
>> Thanks,
>>
>> Igor
>>
>> On 11/23/2020 2:51 AM, Seena Fallah wrote:
>> > Now one of my OSDs got a segfault.
>> > Here is the full trace: https://paste.ubuntu.com/p/4KHcCG9YQx/
>> >
>> > On Mon, Nov 23, 2020 at 2:16 AM Seena Fallah <seenafallah@gmail.com> wrote:
>> >
>> >
>> >> Hi all,
>> >>
>> >> After upgrading from 14.2.9 to 14.2.14 my OSDs are using much less
>> >> memory than before! I give each OSD a 6 GB memory target; before the
>> >> upgrade free memory was 20 GB, and now, 24h after the upgrade, I have
>> >> 104 GB free out of 128 GB! Also, my OSD latency has increased!
>> >> This happens in both the SSD and HDD tiers.
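>> >>
>> >> For reference, a minimal sketch of how the target is set and how
>> >> per-OSD memory can be inspected from the admin socket (osd.0 is just
>> >> an example ID):
>> >>
>> >>     ceph config set osd osd_memory_target 6442450944   # 6 GiB
>> >>     ceph daemon osd.0 dump_mempools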
>> >>
>> >> Are there any upgrade notes I missed? Is this related to
>> >> bluefs_buffered_io?
>> >> If BlueFS does direct IO, shouldn't BlueFS/BlueStore use the targeted
>> >> memory for its cache? And does that mean that before the upgrade the
>> >> memory was used by the kernel to buffer the IO, rather than by
>> >> ceph-osd?
>> >>
>> >> Thanks.
>> >>
>> > _______________________________________________
>> > ceph-users mailing list -- ceph-users@ceph.io
>> > To unsubscribe send an email to ceph-users-leave@ceph.io
>>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io