List: ceph-ansible
Subject: [Ceph-ansible] still problems deploying CephFS
From: Clausen, Jörn <jclausen () geomar ! de>
Date: 2019-04-03 9:37:34
Message-ID: 804b5578-4653-172d-5503-d36626aec206 () geomar ! de
Hi!
As issue #3606 is still a thing (deploying MDSs, but not creating a
CephFS by default), I wanted to let ceph-ansible have its way and create
a CephFS. In group_vars/all.yml I have
cephfs: data
cephfs_data: pool-fs-data
cephfs_metadata: pool-fs-metadata
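My understanding (which may well be wrong) is that cephfs names the filesystem and the other two variables name the data and metadata pools, i.e. the end result should be roughly what one would get by hand with something like this (pg_num 8 only as a placeholder, fs name "data" taken from my cephfs setting):

$ ceph osd pool create pool-fs-data 8
$ ceph osd pool create pool-fs-metadata 8
$ ceph fs new data pool-fs-metadata pool-fs-data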
The pools are created, but the playbook bails out with
failed: [cephtmds01 -> 172.17.0.35] (item=pool-fs-data) => changed=false
  cmd:
  - ceph
  - --cluster
  - ceph
  - osd
  - pool
  - application
  - enable
  - pool-fs-data
  - data
  delta: '0:00:00.561893'
  end: '2019-04-03 11:26:09.614799'
  item: pool-fs-data
  msg: non-zero return code
  rc: 1
  start: '2019-04-03 11:26:09.052906'
  stderr: 'Error EPERM: Are you SURE? Pool ''pool-fs-data'' already has an enabled application; pass --yes-i-really-mean-it to proceed anyway'
  stderr_lines:
  - 'Error EPERM: Are you SURE? Pool ''pool-fs-data'' already has an enabled application; pass --yes-i-really-mean-it to proceed anyway'
  stdout: ''
  stdout_lines: <omitted>
failed: [cephtmds01 -> 172.17.0.35] (item=pool-fs-metadata) => changed=false
  cmd:
  - ceph
  - --cluster
  - ceph
  - osd
  - pool
  - application
  - enable
  - pool-fs-metadata
  - data
  delta: '0:00:01.217422'
  end: '2019-04-03 11:26:11.072246'
  item: pool-fs-metadata
  msg: non-zero return code
  rc: 1
  start: '2019-04-03 11:26:09.854824'
  stderr: 'Error EPERM: Are you SURE? Pool ''pool-fs-metadata'' already has an enabled application; pass --yes-i-really-mean-it to proceed anyway'
  stderr_lines:
  - 'Error EPERM: Are you SURE? Pool ''pool-fs-metadata'' already has an enabled application; pass --yes-i-really-mean-it to proceed anyway'
  stdout: ''
  stdout_lines: <omitted>
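If I read the failed task correctly, it tries to enable the application "data" (presumably taken from my cephfs setting?) on pools that already carry an application. For what it's worth, the current assignment can be queried directly:

$ ceph osd pool application get pool-fs-data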
The pools are created, and indeed have the correct application associated:
$ ceph osd pool ls detail
pool 2 'pool-fs-data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 234 flags hashpspool stripe_width 0 application cephfs
pool 3 'pool-fs-metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode warn last_change 234 flags hashpspool stripe_width 0 application cephfs
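If I read the error message correctly, I could force the tasks through by hand like this, although I am not sure that enabling a second application ("data" on top of "cephfs") is really what is intended:

$ ceph osd pool application enable pool-fs-data data --yes-i-really-mean-it
$ ceph osd pool application enable pool-fs-metadata data --yes-i-really-mean-it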
This is ceph-ansible stable-4.0, deploying Nautilus on CentOS 7.
Did I miss something, or is this worth another bug report?
--
Jörn Clausen
Daten- und Rechenzentrum
GEOMAR Helmholtz-Zentrum für Ozeanforschung Kiel
Düsternbrookerweg 20
24105 Kiel
["smime.p7s" (application/pkcs7-signature)]