
List:       ceph-users
Subject:    [ceph-users] recovering damaged rbd volume
From:       mike brown <mike.brown1535@outlook.com>
Date:       2021-04-28 19:30:47
Message-ID: DB9P193MB11949CC4A8007FDA632B5F899C409@DB9P193MB1194.EURP193.PROD.OUTLOOK.COM

Hello all,

I've had an incident with one of my most important RBD volumes, holding 5 TB of data and managed by OpenStack. I meant to grow the volume live, but I accidentally shrank it by running the wrong "virsh qemu-monitor-command". As soon as I realized the mistake I expanded it again, but obviously my data is gone. Can you help me, or give me hints on how I can recover the data on this RBD volume?
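As far as I understand, shrinking deletes the image's backing RADOS objects past the new size, so the first thing worth establishing is whether any of those objects survived. Below is a minimal diagnostic sketch of that check using the librados/librbd C APIs (the pool name "volumes" and image name "volume-xxxx" are placeholders for my setup; error handling is omitted for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    rbd_image_t image;
    char prefix[128];

    rados_create(&cluster, NULL);             /* connect as client.admin */
    rados_conf_read_file(cluster, NULL);      /* default ceph.conf search path */
    rados_connect(cluster);
    rados_ioctx_create(cluster, "volumes", &ioctx);  /* pool name: placeholder */

    rbd_open_read_only(ioctx, "volume-xxxx", &image, NULL);   /* image name: placeholder */
    rbd_get_block_name_prefix(image, prefix, sizeof(prefix)); /* e.g. "rbd_data.<id>" */
    rbd_close(image);

    rados_list_ctx_t lctx;
    const char *entry;
    uint64_t count = 0, highest = 0;

    rados_nobjects_list_open(ioctx, &lctx);
    while (rados_nobjects_list_next(lctx, &entry, NULL, NULL) == 0) {
        if (strncmp(entry, prefix, strlen(prefix)) == 0) {
            /* object names look like "<prefix>.<hex object number>" */
            uint64_t n = strtoull(entry + strlen(prefix) + 1, NULL, 16);
            if (n > highest)
                highest = n;
            count++;
        }
    }
    rados_nobjects_list_close(lctx);

    printf("%llu data objects remain, highest object number 0x%llx\n",
           (unsigned long long)count, (unsigned long long)highest);

    rados_ioctx_destroy(ioctx);
    rados_shutdown(cluster);
    return 0;
}

If the highest remaining object number sits below the point the image was shrunk to, the trailing objects really were deleted (keeping in mind that RBD is thin-provisioned, so regions that were never written had no objects to begin with).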

Unfortunately, I don't have any backup of this data, and it's really important (I know I made a big mistake). Also, I can't stop the cluster, since it's under heavy production load.

It seems qemu still uses the old librbd API that allows shrinking by default, without any warning, even in the latest version (all the other standard ways of resizing a volume refuse to shrink); see the sketch after the links below.

  *   Ceph: https://sourcegraph.com/github.com/ceph/ceph@luminous/-/blob/src/librbd/librbd.cc#L815
  *   Qemu: https://sourcegraph.com/github.com/qemu/qemu/-/blob/block/rbd.c#L832
  *   Libvirt: https://sourcegraph.com/github.com/libvirt/libvirt/-/blob/src/storage/storage_backend_rbd.c#L1280
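For illustration, a minimal sketch of the API difference (the wrapper function is mine, not qemu's): the legacy rbd_resize() will shrink an image without complaint, while the newer rbd_resize2() takes an explicit allow_shrink flag and, as far as I can tell, fails with -EINVAL when the flag is false and the new size is smaller than the current one.

#include <stdbool.h>
#include <stdint.h>
#include <rbd/librbd.h>

/* librbd invokes the progress callback, so pass a no-op rather than NULL */
static int noop_progress(uint64_t offset, uint64_t total, void *data)
{
    (void)offset; (void)total; (void)data;
    return 0;
}

/* hypothetical grow-only wrapper around an already-open image */
static int grow_only_resize(rbd_image_t image, uint64_t new_size)
{
    /* legacy API, as used by qemu: shrinks silently if new_size is smaller
     *   return rbd_resize(image, new_size);
     */

    /* newer API: with allow_shrink=false, a shrink attempt is rejected */
    return rbd_resize2(image, new_size, false /* allow_shrink */,
                       noop_progress, NULL);
}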



_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io

