
List:       ceph-users
Subject:    [ceph-users] Re: fault tolerant about erasure code pool
From:       Lindsay Mathieson <lindsay.mathieson () gmail ! com>
Date:       2020-06-26 10:03:09
Message-ID: 048846bf-c08b-ca3e-c94c-9de4b468deb4 () gmail ! com

On 26/06/2020 6:31 pm, Zhenshi Zhou wrote:
> I'm going to deploy a cluster with erasure code pool for cold storage.
> There are 3 servers for me to set up the cluster, 12 OSDs on each server.
> Does that mean the data is safe while 1/3 of the cluster's OSDs are down,
> or only while 2 OSDs are down, if I set the EC profile with k=4 and m=2?

The default failure domain is host, so EC(4,2) needs a minimum of 6 hosts. 
While the pool could keep serving data with up to two hosts down, it would 
be unable to rebalance.
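For reference, a profile like that is created as below. The profile and 
pool names ("cold_ec", "cold-storage") and the PG count are illustrative 
choices, not anything from the thread:

```shell
# Create an EC profile with 4 data + 2 coding shards, spread across hosts
ceph osd erasure-code-profile set cold_ec k=4 m=2 crush-failure-domain=host

# Create an erasure-coded pool using that profile (128 PGs as an example)
ceph osd pool create cold-storage 128 128 erasure cold_ec
```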


3 hosts only supports k=2, m=1 - not recommended ;)
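The arithmetic behind that can be sketched as follows (a toy model, not a 
Ceph API; function names are made up for illustration):

```python
def max_host_failures(k, m, hosts):
    """How many whole-host failures an EC(k, m) pool can survive,
    assuming the default CRUSH failure domain of 'host', which places
    each of the k+m shards on a distinct host."""
    shards = k + m
    if hosts < shards:
        # Not enough hosts to place one shard per host: the profile
        # cannot be satisfied at all.
        return None
    # Data stays readable as long as any k shards survive, i.e. up to
    # m hosts may fail.
    return m

# EC(4,2) on 6 hosts: survives 2 host failures (but cannot rebalance then)
print(max_host_failures(4, 2, 6))   # 2
# EC(4,2) on 3 hosts: shards cannot even be placed
print(max_host_failures(4, 2, 3))   # None
# EC(2,1) on 3 hosts: survives only 1 host failure
print(max_host_failures(2, 1, 3))   # 1
```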

-- 
Lindsay
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
