
List:       ceph-users
Subject:    [ceph-users] Re: Erasure coding RBD pool for OpenStack
From:       Lazuardi Nasution <mrxlazuardin () gmail ! com>
Date:       2020-08-31 8:55:30
Message-ID: CA+u3GuJEx_1oqOH__HuFe2Y=nyp2rG8k_dthzRTy-1p2YHSZsg () mail ! gmail ! com

Hi Max,

So, it seems that you prefer to use the image cache rather than allowing cross
access between Ceph users. That way, all communication is API based, and the
snapshot and CoW happen inside the same pool for a single Ceph client only,
don't they? I'll consider this approach and compare it with the cross-pool
access approach. Thank you for your guidance.
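
For reference, the image volume cache from the doc Max links below is enabled
per backend in cinder.conf roughly like this; a minimal sketch, where the
internal tenant IDs and cache limits are placeholders/illustrative values in
the same {{ }} style as the configs quoted further down, not values from this
thread:

[DEFAULT]
cinder_internal_tenant_project_id = {{ internal_project_id }}
cinder_internal_tenant_user_id = {{ internal_user_id }}

[ceph]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50

With this in place, the first volume created from an image populates a cached
volume inside the backend's own pool, and later volumes can be CoW clones of
that cached volume within the same pool, so no cross-pool client access is
needed for the clone path.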

Best regards,

On Mon, Aug 31, 2020 at 1:53 PM Max Krasilnikov <pseudo@avalon.org.ua>
wrote:

> Hello!
>
>  Mon, Aug 31, 2020 at 01:06:13AM +0700, mrxlazuardin wrote:
>
> > Hi Max,
> >
> > As far as I know, cross access to Ceph pools is needed for the copy-on-write
> > feature, which enables fast cloning/snapshotting. For example, the nova and
> > cinder users need read access to the images pool to do copy on write from
> > such an image. So, it seems that the Ceph policy from the previous URL can be
> > modified to look like the following.
> >
> >
> > ceph auth get-or-create client.nova mon 'profile rbd' osd 'profile rbd pool=vms, profile rbd-read-only pool=images' mgr 'profile rbd pool=vms'
> > ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd-read-only pool=images' mgr 'profile rbd pool=volumes'
> >
> > Since it is just read access, I think this will not be a problem. I hope you
> > are right that cross-pool writing is API based. What do you think?
>
> I use the image volume cache, since creating volumes from images is common:
>
> https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
> Cinder-backup, AFAIR, uses snapshots too.
>
> > On Sun, Aug 30, 2020 at 2:05 AM Max Krasilnikov <pseudo@avalon.org.ua>
> > wrote:
> >
> > > Good day!
> > >
> > >  Sat, Aug 29, 2020 at 10:19:12PM +0700, mrxlazuardin wrote:
> > >
> > > > Hi Max,
> > > >
> > > > I see, it is very helpful and inspiring, thank you for that. I assume that
> > > > you use the same approach for Nova ephemeral storage (nova user to vms pool).
> > >
> > > As of now I don't use any non-cinder volumes in the cluster.
> > >
> > > > How do you set the policy for cross-pool access between them? I mean the
> > > > nova user to the images and volumes pools, the cinder user to the images,
> > > > vms and backups pools, and of course the cinder-backup user to the volumes
> > > > pool. I think each user will need that cross-pool access, and it will not
> > > > be a problem on reading, since the EC data pool is defined per RBD image on
> > > > creation. But how about writing, do you think there will be no cross-pool
> > > > writing?
> > >
> > > Note that all OpenStack services interact with each other using API calls
> > > and message queues, not by accessing data, databases and files directly. Any
> > > of them may be deployed standalone. The only glue between them is Keystone.
> > >
> > > > On Sat, Aug 29, 2020 at 2:21 PM Max Krasilnikov <
> pseudo@avalon.org.ua>
> > > > wrote:
> > > >
> > > > > Hello!
> > > > >
> > > > >  Fri, Aug 28, 2020 at 09:18:05PM +0700, mrxlazuardin wrote:
> > > > >
> > > > > > Hi Max,
> > > > > >
> > > > > > Would you mind sharing some config examples? What happens if we create
> > > > > > an instance which boots from a newly created or existing volume?
> > > > >
> > > > > In cinder.conf:
> > > > >
> > > > > [ceph]
> > > > > volume_driver = cinder.volume.drivers.rbd.RBDDriver
> > > > > volume_backend_name = ceph
> > > > > rbd_pool = volumes
> > > > > rbd_user = cinder
> > > > > rbd_secret_uuid = {{ rbd_uid }}
> > > > > ....
> > > > >
> > > > > [ceph-private]
> > > > > volume_driver = cinder.volume.drivers.rbd.RBDDriver
> > > > > volume_backend_name = ceph-private
> > > > > rbd_pool = volumes-private-meta
> > > > > rbd_user = cinder-private
> > > > > rbd_secret_uuid = {{ rbd_uid_private }}
> > > > > ....
> > > > >
> > > > > /etc/ceph/ceph.conf:
> > > > >
> > > > > [client.cinder-private]
> > > > > rbd_default_data_pool = volumes-private
> > > > >
> > > > > openstack volume type show private
> > > > > ...
> > > > > | properties         | volume_backend_name='ceph-private'   |
> > > > > ...
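
A 'private' volume type like the one shown above can be wired to the backend
with the standard OpenStack CLI; a minimal sketch, assuming the backend name
from the cinder.conf quoted above (not commands taken from this thread):

openstack volume type create private
openstack volume type set --property volume_backend_name=ceph-private private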
> > > > >
> > > > > The erasure pool with a metadata pool was created as described here:
> > > > > https://docs.ceph.com/docs/master/rados/operations/erasure-code/#erasure-coding-with-overwrites
> > > > > So, the data pool is volumes-private and the metadata pool is a replicated
> > > > > pool named volumes-private-meta.
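
For readers following along, such a pool pair can be created roughly as the
linked page describes; a minimal sketch, where the erasure-code profile (k/m)
and PG counts are illustrative assumptions, not values from this thread:

ceph osd erasure-code-profile set ec-profile k=4 m=2
ceph osd pool create volumes-private 128 128 erasure ec-profile
ceph osd pool set volumes-private allow_ec_overwrites true
ceph osd pool create volumes-private-meta 64 64 replicated
ceph osd pool application enable volumes-private rbd
ceph osd pool application enable volumes-private-meta rbd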
> > > > >
> > > > > Instances are running well with this config. All my instances boot from
> > > > > volumes, even volumes of type 'private'.
> > > > >
> > > > > The metadata pool is quite small: it has 1.8 MiB used, while the data pool
> > > > > has 279 GiB used. Your particular sizes may differ, but not by much.
> > > > >
> > > > > > Best regards,
> > > > > >
> > > > > >
> > > > > > On Fri, Aug 28, 2020 at 5:27 PM Max Krasilnikov <
> > > pseudo@avalon.org.ua>
> > > > > > wrote:
> > > > > >
> > > > > > > Hello!
> > > > > > >
> > > > > > >  Fri, Aug 28, 2020 at 04:05:55PM +0700, mrxlazuardin wrote:
> > > > > > >
> > > > > > > > Hi Konstantin,
> > > > > > > >
> > > > > > > > I hope you or anybody still follows this old thread.
> > > > > > > >
> > > > > > > > Can this EC data pool be configured per pool, not per client? If we
> > > > > > > > follow https://docs.ceph.com/docs/master/rbd/rbd-openstack/ we may see
> > > > > > > > that the cinder client will access the vms and volumes pools, both
> > > > > > > > with read and write permission. How can we handle this?
> > > > > > > >
> > > > > > > > If we configure different clients for nova (vms) and cinder (volumes),
> > > > > > > > I think there will be a problem if there is cross-pool access,
> > > > > > > > especially on write. Let's say that the nova client will create a
> > > > > > > > volume on instance creation for booting from that volume. Any thoughts?
> > > > > > >
> > > > > > > As per these docs, nova will use the pools as the client.cinder user.
> > > > > > > When using replicated + erasure pools with cinder I have created two
> > > > > > > different users for them and two different backends in cinder.conf for
> > > > > > > the same cluster with different credentials, as rbd_default_data_pool
> > > > > > > may be set only per-user in ceph.conf. So there were 2 different rbd
> > > > > > > secret UUIDs installed in libvirt and 2 different volume types in cinder.
> > > > > > >
> > > > > > > As I understand it, you need something like my setup.
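
For completeness, the second libvirt secret Max mentions would be registered
roughly like this; a minimal sketch reusing the {{ rbd_uid_private }}
placeholder from the cinder.conf quoted above, not commands taken from the
thread:

cat > secret-private.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>{{ rbd_uid_private }}</uuid>
  <usage type='ceph'>
    <name>client.cinder-private secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret-private.xml
virsh secret-set-value --secret {{ rbd_uid_private }} \
    --base64 $(ceph auth get-key client.cinder-private)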
> > > > > > >
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > >
> > > > > > > >
> > > > > > > > > Date: Wed, 11 Jul 2018 11:16:27 +0700
> > > > > > > > > From: Konstantin Shalygin <k0ste@k0ste.ru>
> > > > > > > > > To: ceph-users@lists.ceph.com
> > > > > > > > > Subject: Re: [ceph-users] Erasure coding RBD pool for
> OpenStack
> > > > > > > > >         Glance, Nova and Cinder
> > > > > > > > > Message-ID: <069ac368-22b0-3d18-937b-70ce39287cb1@k0ste.ru>
> > > > > > > > > Content-Type: text/plain; charset=utf-8; format=flowed
> > > > > > > > >
> > > > > > > > > > So if you want, two more questions for you:
> > > > > > > > > >
> > > > > > > > > > - How do you handle your ceph.conf configuration (default data
> > > > > > > > > >   pool by user) / distribution? Manually, config management,
> > > > > > > > > >   openstack-ansible... ?
> > > > > > > > > > - Did you make comparisons, benchmarks between replicated pools
> > > > > > > > > >   and EC pools, on the same hardware / drives? I read that small
> > > > > > > > > >   writes are not very performant with EC.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > ceph.conf with the default data pool is only needed by Cinder at
> > > > > > > > > image creation time; after that, a Luminous+ RBD client will find
> > > > > > > > > the "data-pool" feature and perform data I/O to that pool.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > # rbd info erasure_rbd_meta/volume-09ed44bf-7d16-453a-b712-a636a0d3d812      <----- meta pool!
> > > > > > > > > > rbd image 'volume-09ed44bf-7d16-453a-b712-a636a0d3d812':
> > > > > > > > > >         size 1500 GB in 384000 objects
> > > > > > > > > >         order 22 (4096 kB objects)
> > > > > > > > > >         data_pool: erasure_rbd_data        <----- our data pool
> > > > > > > > > >         block_name_prefix: rbd_data.6.a2720a1ec432bf
> > > > > > > > > >         format: 2
> > > > > > > > > >         features: layering, exclusive-lock, object-map, fast-diff, deep-flatten, data-pool        <----- "data-pool" feature
> > > > > > > > > >         flags:
> > > > > > > > > >         create_timestamp: Sat Jan 27 20:24:04 2018
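
For reference, the same layout can be reproduced outside of Cinder directly
with the rbd CLI; a minimal sketch using the pool names from the output above,
with an illustrative image name and size:

rbd create --size 1500G --data-pool erasure_rbd_data erasure_rbd_meta/test-volume
rbd info erasure_rbd_meta/test-volume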
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > k
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
