
List:       ceph-users
Subject:    [ceph-users] Re: Autoscale recommendtion seems to small + it broke my pool...
From:       Lindsay Mathieson <lindsay.mathieson () gmail ! com>
Date:       2020-06-22 23:14:36
Message-ID: 54dca996-5e1c-62c4-ea1a-d5ca4f477789 () gmail ! com

Thanks Eugen

On 22/06/2020 10:27 pm, Eugen Block wrote:
> Regarding the inactive PGs, how are your pools configured? Can you share
>
> ceph osd pool ls detail
>
> It could be an issue with min_size (is it also set to 3?).
>

pool 2 'ceph' replicated size 3 min_size 1 crush_rule 0 object_hash 
rjenkins pg_num 512 pgp_num 512 autoscale_mode warn last_change 8516 
lfor 0/5823/5807 flags hashpspool,selfmanaged_snaps stripe_width 0 
compression_algorithm lz4 compression_mode aggressive application rbd
         removed_snaps [1~3,5~2]
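
Re min_size: it is indeed 1 here, not 3. If that turns out to be the culprit, I believe it can be checked and bumped per pool with something along these lines (pool name 'ceph' taken from the output above):

    ceph osd pool get ceph min_size        # show the current value
    ceph osd pool set ceph min_size 2      # require 2 of 3 replicas before accepting I/O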

Autoscale is no longer recommending a pg_num change (I have it set to 
warn). Back when this happened I was still setting the ceph cluster up, 
incrementally moving VMs to the ceph pool and adding OSDs as space was 
freed up on the old storage (lizardfs), so not an ideal setup :) For a 
while the pool was in a constant state of imbalance and redistribution; 
I probably should have disabled the autoscaler until I finished.
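
For the record, if I were doing the migration again I would probably just 
flip the autoscaler off on the pool and turn it back to warn afterwards, 
something like this (same pool name assumed):

    ceph osd pool set ceph pg_autoscale_mode off    # stop the autoscaler acting on this pool
    ceph osd pool set ceph pg_autoscale_mode warn   # re-enable warnings once things settle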


Finished now:

POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
ceph  3230G               3.0   34912G        0.2776                1.0   512                 warn
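
(For anyone wondering about the RATIO column, it appears to just be 
SIZE * RATE / RAW CAPACITY:

    3230 * 3.0 / 34912 ≈ 0.2776
)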

-- 
Lindsay
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
