
List:       ceph-users
Subject:    [ceph-users] Re: [EXTERNAL] RE: OSDs flapping with "_open_alloc loaded 132 GiB in 2930776 extents av
From:       Stefan Kooman <stefan () bit ! nl>
Date:       2021-09-30 18:56:33
Message-ID: 2699c58a-9d16-f7e0-d79d-4712c6679e63 () bit ! nl

Hi,

On 9/30/21 18:02, Igor Fedotov wrote:

> Using non-default min_alloc_size is generally not recommended, primarily 
> due to performance penalties. Some side effects (like yours) can be 
> observed as well. That's simple - non-default parameters generally mean 
> much worse QA coverage from devs and less adoption/experience from users. 
> Hence they're risky.
> 
> It's the Pacific release that allows using 4K min_alloc_size 
> [almost/hopefully] without such penalties.

We have been using "bluestore_min_alloc_size_ssd" = 4k since Luminous 
12.2.2 (with "bluefs_alloc_size": "1048576" and 
"bluefs_shared_alloc_size": "65536"). In Luminous we first used the 
(then) default allocator "stupid", and later switched to bitmap, which 
we are still using today (because of issues with the hybrid allocator 
that by now have hopefully all been fixed).
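
For reference, what a running OSD actually uses can be checked via the 
admin socket on the OSD host, e.g. (osd.0 is just an example id):

   # effective allocation sizes and allocator of a running OSD
   ceph daemon osd.0 config get bluestore_min_alloc_size_ssd
   ceph daemon osd.0 config get bluefs_shared_alloc_size
   ceph daemon osd.0 config get bluestore_allocator

Keep in mind that min_alloc_size is baked in at OSD mkfs time, so 
existing OSDs keep the value they were created with regardless of the 
current config setting.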

Fortunately we haven't run into any such issues, probably because we 
still use the bitmap allocator.

At some point in time (Pacific?) we will redeploy all OSDs with the 
hybrid allocator.


> 3) reduce main space fragmentation by using the Hybrid allocator from 
> scratch - OSD redeployment is required as well.
> 
>> We deployed these clusters at nautilus with the default allocator, 
>> which was bitmap I think? After redeploying condor on octopus, it 
>> seems to be running the hybrid allocator now.   So I think we've 
>> inadvertently already carried out this action on condor.

So the preliminary takeaway for now seems to be that clusters running 
with min_alloc_size 4K should stick with the bitmap allocator until 
Pacific and then re-deploy with the hybrid allocator.
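
In practice I'd expect that to boil down to something like the 
following (untested sketch, assuming the centralized config database 
available since Nautilus):

   # have (re)deployed OSDs use the hybrid allocator
   # (I believe this is the default in Octopus and later anyway)
   ceph config set osd bluestore_allocator hybrid
   # keep 4K allocation units for SSD-backed OSDs
   ceph config set osd bluestore_min_alloc_size_ssd 4096

followed by destroying and re-creating the OSDs one by one, since the 
allocation unit size and the accumulated free-space fragmentation only 
go away with a fresh mkfs.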

Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
