List: ceph-users
Subject: [ceph-users] Re: bucket_index_max_shards vs. no resharding in multisite? How to brace RADOS for huge
From: Christian Rohmann <christian.rohmann () inovex ! de>
Date: 2021-09-30 15:47:39
Message-ID: 4a7f9b8d-af9f-0caf-2cf2-322efc387ad9 () inovex ! de
On 30/09/2021 17:02, Christian Rohmann wrote:
>
> Looking at my zones I can see that the master zone (converted from
> previously single-site setup) has
>
>> bucket_index_max_shards=0
>
> while the other, secondary zone has
>> bucket_index_max_shards=11
>
> Should I align this and use "11" as the default static number of
> shards for all new buckets then?
> Maybe an even higher (prime) number, just to be safe?
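To make the sizing question above concrete, here is a small sketch of how one might derive a static shard count. It assumes the commonly cited guideline of roughly 100,000 objects per index shard (the default for rgw_max_objs_per_shard) and rounds up to a prime, as hinted at above; both the guideline value and the prime rounding are assumptions here, not anything radosgw enforces:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality check, fine for small shard counts."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True


def recommended_shards(expected_objects: int, objs_per_shard: int = 100_000) -> int:
    """Suggest a bucket_index_max_shards value for an expected object count.

    Assumes ~objs_per_shard objects per index shard and rounds the result
    up to the next prime so keys spread evenly across shards.
    """
    # Ceiling division: minimum shard count to stay under the target density.
    shards = max(1, -(-expected_objects // objs_per_shard))
    # Round up to the next prime.
    while not is_prime(shards):
        shards += 1
    return shards
```

With these assumptions, a bucket expected to hold one million objects would get 11 shards, which happens to match the value the secondary zone here defaults to.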
Reading
https://docs.ceph.com/en/octopus/install/ceph-deploy/install-ceph-gateway/#configure-bucket-sharding
again, there are instructions on editing the zonegroup JSON to
set bucket_index_max_shards to something sensible.
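For reference, the procedure described there boils down to roughly the following (a sketch to be adapted to the local setup; the example shard value of 11 is just the one discussed above):

```shell
# Export the current zonegroup configuration to JSON.
radosgw-admin zonegroup get > zonegroup.json

# Edit zonegroup.json and set "bucket_index_max_shards" to the
# desired default for new buckets (e.g. 11).

# Import the modified configuration.
radosgw-admin zonegroup set < zonegroup.json

# In a multisite setup, commit the change to the period so it
# propagates to the other zones.
radosgw-admin period update --commit

# Restart the radosgw instances afterwards for the change to take effect.
```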
Unfortunately there is no word about this in the multisite conversion
section
(https://docs.ceph.com/en/octopus/radosgw/multisite/#migrating-a-single-site-system-to-multi-site).
Maybe it would be sensible to add it there, to ensure folks converting
to a multisite setup don't end up with huge unsharded bucket indices
which can neither be resharded manually nor automatically.
Regards
Christian
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io