
List:       ceph-users
Subject:    [ceph-users] Re: New Ceph cluster in PRODUCTION
From:       Michel Niyoyita <micou12@gmail.com>
Date:       2021-09-30 6:45:53
Message-ID: CA+drX3FeShOi_QLFmZY5FhOT+OOZub-WWaw6dF=S-4Ve8PZV3A@mail.gmail.com

Hello Eugen

We plan to start with a small cluster using HDD disks and a replica count of 2.
It will consist of 6 hosts: 3 of them are MONs, which will also hold 2 MGRs,
and the remaining 3 are OSD hosts. We will use VMs for deployment. One of the
OSD hosts will also act as the RGW client for object storage, and the failure
domain will be based on hosts.
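
For reference, below is a rough sketch of the checks I plan to run once the
cluster is up, to confirm the replication settings and the host-based failure
domain (nothing here is specific to our setup, it is just the standard CLI):

  ceph -s                  # overall cluster health
  ceph osd df tree         # OSDs laid out per host, to confirm the failure domain
  ceph osd pool ls detail  # size/min_size and CRUSH rule of each pool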

Your inputs are highly appreciated.

Michel

On Thu, Sep 30, 2021 at 8:30 AM Eugen Block <eblock@nde.ag> wrote:

> Hi,
>
> there is no information about your ceph cluster, e.g. hdd/ssd/nvme
> disks. This information can be crucial with regard to performance.
> Also why would you use
>
> > osd_pool_default_min_size = 1
> > osd_pool_default_size = 2
>
> There have been endless discussions on this list about why a pool size of
> 2 is a bad idea. I would recommend either removing these lines or setting
> min_size = 2 and size = 3; those are reasonable values for replicated
> pools.
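> For example, roughly (just a sketch; replace <pool> with your actual pool
> names, since pools that already exist have to be changed explicitly):
>
>   ceph config set global osd_pool_default_size 3
>   ceph config set global osd_pool_default_min_size 2
>   ceph osd pool set <pool> size 3
>   ceph osd pool set <pool> min_size 2
>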
> If you shared your 'ceph osd tree' and your rulesets (and profiles, if
> you intend to use EC), it would help to get a better overview of your
> cluster.
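> That is, roughly the output of (a sketch, run from any admin or mon node):
>
>   ceph osd tree
>   ceph osd crush rule ls
>   ceph osd crush rule dump
>   ceph osd erasure-code-profile ls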
>
>
> Quoting Michel Niyoyita <micou12@gmail.com>:
>
> > Hello Team
> >
> > I am new to Ceph. I am going to deploy Ceph in production for the first
> > time, and it will be integrated with OpenStack. Below are my ceph.conf
> > configuration and my Ansible inventory setup.
> >
> > If I have missed something important, please let me know and advise on
> > the changes I have to make. I deployed Ceph Pacific using Ansible on
> > Ubuntu 20.04.
> >
> > For now the cluster is working.
> >
> > Best Regards.
> >
> > Michel
> >
> > [client]
> > rbd_default_features = 1
> >
> > [client.libvirt]
> > admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok # must be writable by QEMU and allowed by SELinux or AppArmor
> > log file = /var/log/ceph/qemu-guest-$pid.log # must be writable by QEMU and allowed by SELinux or AppArmor
> >
> > [client.rgw.ceph-osd33333]
> > rgw_dns_name = ceph-osd33333
> >
> > [client.rgw.ceph-osd33333.rgw0]
> > host = ceph-osd33333
> > keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-osd33333.rgw0/keyring
> > log file = /var/log/ceph/ceph-rgw-ceph-osd3.rgw0.log
> > rgw frontends = beast endpoint=10.10.10.10:8080
> > rgw thread pool size = 512
> > rgw_dns_name = ceph-osd33333
> > rgw_frontends = "beast port=8080"
> > rgw_enable_usage_log = true
> > rgw_thread_pool_size = 512
> > rgw_keystone_api_version = 3
> > rgw_keystone_url = http://kolla:5000
> > rgw_keystone_admin_user = admin
> > rgw_keystone_admin_password = xxxxxxxxxxxxxxx
> > rgw_keystone_admin_domain = default
> > rgw_keystone_admin_project = admin
> > rgw_keystone_accepted_roles = admin,Member,_member_,member
> > rgw_keystone_verify_ssl = false
> > rgw_s3_auth_use_keystone = true
> >
> >
> > [global]
> > auth_client_required = cephx
> > auth_cluster_required = cephx
> > auth_service_required = cephx
> > cluster network = 10.10.29.0/24
> > fsid = e4877c82-84f2-439a-9f43-34f2b1b6678a
> > mon host = [v2:10.10.10.10:3300,v1:10.10.10.10:6789],[v2:10.10.10.10:3300,v1:10.10.10.10:6789],[v2:10.10.10.10:3300,v1:10.10.10.10:6789]
> > mon_allow_pool_delete = True
> > mon_clock_drift_allowed = 0.5
> > mon_max_pg_per_osd = 400
> > mon_osd_allow_primary_affinity = 1
> > mon_pg_warn_max_object_skew = 0
> > mon_pg_warn_max_per_osd = 0
> > mon_pg_warn_min_per_osd = 0
> > osd_pool_default_min_size = 1
> > osd_pool_default_size = 2
> > public network = 0.0.0.0/0
> >
> >
> > [mons]
> > ceph-mon11111
> > ceph-mon22222
> > ceph-mon33333
> >
> > [osds]
> > ceph-osd11111
> > ceph-osd22222
> > ceph-osd33333
> >
> > [mgrs]
> > ceph-mon11111
> > ceph-mon22222
> >
> > [grafana-server]
> > ceph-mon11111
> >
> > [clients]
> > ceph-osd33333
> >
> > [rgws]
> > ceph-osd33333
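> >
> > For completeness, the inventory above is run with ceph-ansible roughly like
> > this (a sketch; the ceph-ansible checkout path and the inventory file name
> > here are only placeholders for my setup):
> >
> >   cd /usr/share/ceph-ansible            # or wherever ceph-ansible is checked out
> >   ansible-playbook -i hosts site.yml    # site-container.yml for containerized deployments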
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
