List: ceph-users
Subject: Re: [ceph-users] New best practices for osds???
From: Anthony D'Atri <aad () dreamsnake ! net>
Date: 2019-07-27 0:37:41
Message-ID: 25BAF9A2-D68A-446D-B2AF-0451D5AD529E () dreamsnake ! net
> This is worse than I feared, but very much in the realm of concerns I
> had with using single-disk RAID0 setups. Thank you very much for
> posting your experience! My money would still be on using *high write
> endurance* NVMes for DB/WAL and whatever I could afford for block.
yw. Of course there are all manner of use-cases and constraints, so others have different experiences. Perhaps with the freedom to not use a certain HBA vendor things would be somewhat better, but in said past life the practice cost hundreds of thousands of dollars.
I personally have a low tolerance for fuss, and management / mapping of WAL/DB devices still seems like a lot of fuss, especially when drives fail or have to be replaced for other reasons.
For RBD clusters/pools at least I really enjoy not having to mess with multiple devices; I'd rather run colo with SATA SSDs than spinners with NVMe WAL+DB.
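For anyone weighing the same tradeoff, a minimal sketch of the two layouts with ceph-volume (BlueStore assumed; the device paths are hypothetical):

    # Colocated: data, WAL, and DB all on one SSD -- nothing extra to map or replace
    ceph-volume lvm create --bluestore --data /dev/sdb

    # Split: spinner for data, NVMe partition for DB (the WAL lands on the DB device
    # unless --block.wal is given). Replacing either device means rebuilding the OSD
    # and keeping the data<->DB mapping straight, which is the fuss mentioned above.
    ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/nvme0n1p1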
- aad
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com