List: ceph-ansible
Subject: [Ceph-ansible] Recovering from a Ceph OSD journal disk failure
From: Asher256 <asher256@gmail.com>
Date: 2017-12-23 1:02:58
Message-ID: CAPAk-pP4apDwoa8YJTCenQipAACPf8UOKJJyQypcuChODQiKTg@mail.gmail.com
Hello,
We use the 'master' branch of ceph-ansible (last commit:
0b55abe3d0fc6db6c93d963545781c05a31503bb, Wed Dec 20 15:29:02 2017 +0100).
This is our ceph-ansible configuration 'osds.yml':
```
osd_objectstore: bluestore
osd_scenario: lvm
lvm_volumes:
  - data: /dev/vdb
    wal: wal_vdb
    wal_vg: ceph_osd_journals
    db: db_vdb
    db_vg: ceph_osd_journals
  - data: /dev/vdc
    wal: wal_vdc
    wal_vg: ceph_osd_journals
    db: db_vdc
    db_vg: ceph_osd_journals
```
To speed up our Ceph OSDs, we decided to store the OSD journals (the
BlueStore WAL and DB volumes) on an external disk: a fast PCIe SSD holding
the volume group "ceph_osd_journals" referenced above.
Questions:
1. Is the configuration above (osds.yml) the right way to configure Ceph
OSDs with ceph-volume? Note that the LVM volume group 'ceph_osd_journals'
was not created by the ceph-ansible playbooks: we created it, along with
the wal_*/db_* logical volumes, in a separate Ansible playbook that runs
before ceph-ansible to prepare the disks.
2. If the Ceph OSD journal (WAL/DB) disk fails, how can we recover the
OSDs once the failed disk has been replaced with a new one?
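For reference, the disk-preparation play mentioned in question 1 can be
sketched with Ansible's lvg and lvol modules. This is a minimal,
hypothetical example: the VG and LV names match the osds.yml above, but the
SSD device path (/dev/nvme0n1), the host group name, and the LV sizes are
assumptions, not taken from our actual playbook.

```yaml
# Hypothetical pre-ceph-ansible play to prepare the WAL/DB disk.
# Assumptions: the PCIe SSD is /dev/nvme0n1; LV sizes are illustrative.
- hosts: osds
  become: true
  tasks:
    - name: Create the volume group on the fast PCIe SSD
      lvg:
        vg: ceph_osd_journals
        pvs: /dev/nvme0n1

    - name: Create WAL and DB logical volumes for each data disk
      lvol:
        vg: ceph_osd_journals
        lv: "{{ item.lv }}"
        size: "{{ item.size }}"
      loop:
        - { lv: wal_vdb, size: 2g }
        - { lv: db_vdb,  size: 30g }
        - { lv: wal_vdc, size: 2g }
        - { lv: db_vdc,  size: 30g }
```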
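For context on question 2: with BlueStore, losing the DB/WAL device
generally means the affected OSDs cannot be recovered in place and must be
redeployed. A hedged sketch of that procedure (the OSD IDs 3 and 4 and the
data device paths are assumptions for illustration):

```shell
# Assumed: osd.3 (/dev/vdb) and osd.4 (/dev/vdc) used the failed WAL/DB disk.
# 1. Take the broken OSDs out of the cluster and remove them.
ceph osd out osd.3
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it

# 2. Wipe the orphaned data devices so they can be re-prepared.
ceph-volume lvm zap /dev/vdb --destroy

# 3. Recreate the ceph_osd_journals VG and wal_*/db_* LVs on the
#    replacement SSD (e.g. with the preparation playbook), then re-run
#    the ceph-ansible playbooks to redeploy the OSDs.
```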
Thank you in advance.
Regards,
Asher256
https://github.com/Asher256