List: drbd-user
Subject: Re: [DRBD-user] DRBD resource expansion trouble
From: Peter Brunnengraeber <pbrunnen () bccglobal ! com>
Date: 2016-09-05 23:28:00
Message-ID: 1280605284.316.1473118077969.JavaMail.pbrunnen () Station8 ! local
Hello all,
Thought I would provide an update as someone may find this useful.
So I was able to resolve my situation. Here is what I did:
- Changed drbd config to use the raw [/dev/sdc] device on the secondary node
- Wiped out /dev/sdc on the secondary node [dd if=/dev/zero of=/dev/sdc bs=1M count=10]
- Recreated the internal metadata [LVM_SYSTEM_DIR= drbdadm create-md vmkfs_vol1]
- Brought up the resource and did a full sync
- Failed over the cluster and repeated the above steps on the other node
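The wipe-and-rebuild sequence above condenses to a small script; the resource and device names are taken from this thread, and the 'invalidate' step is my addition in case the full sync does not start on its own. It is a dry run by default (commands are printed, not executed):

```shell
#!/bin/sh
# Sketch of the wipe-and-rebuild steps above, as run on the SECONDARY node.
# Unset DRY_RUN only on a node whose data you intend to destroy and resync.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

RES=vmkfs_vol1   # resource name from this thread
DEV=/dev/sdc     # raw backing device, no partition

run drbdadm down "$RES"
run dd if=/dev/zero of="$DEV" bs=1M count=10        # clear old partition table / labels
run env LVM_SYSTEM_DIR= drbdadm create-md "$RES"    # recreate internal metadata
run drbdadm up "$RES"
run drbdadm invalidate "$RES"                       # force full sync from the peer, if needed
```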
With kind regards,
-Peter Brunnengräber
----- Original Message -----
From: "Peter Brunnengraeber" <pbrunnen@bccglobal.com>
To: drbd-user@lists.linbit.com
Sent: Sunday, August 21, 2016 3:40:47 PM
Subject: DRBD resource expansion trouble
Hello,
I have been having an issue attempting to expand a DRBD storage resource.
Our system is hardware RAID (LSI MegaRAID) -> DRBD (internal metadata) -> LVM storage.
The storage software we use requires control of LVM, and we backed it with DRBD, hence the unusual setup with LVM on top of DRBD.
- We added disks and extended the raid storage backend
- Issued 'echo 1 >/sys/class/scsi_device/1\:2\:2\:0/device/rescan'
- Kernel and 'parted /dev/sdc print free' sees the new size and free space
We attempted to do an online expansion based on the notes in a previous posting:
Based on> http://lists.linbit.com/pipermail/drbd-user/2014-February/020663.html
- Attempted 'drbdmeta /dev/drbd1 v08 /dev/sdc1 internal check-resize', but as far as we could tell it did nothing
- 'pvdisplay /dev/sdc' still shows the old size; 'parted /dev/sdc print free' also still shows the old size and free space
- Attempted 'drbdadm -v resize vmkfs_vol1', which also does nothing: no error, but no additional space added to the DRBD device
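For reference, the online path we attempted condenses to three commands (a dry-run sketch; commands are printed, not executed, and the SCSI address is the one from above). Note that 'drbdadm resize' can only grow the resource once the backing block device itself has grown; here the /dev/sdc1 partition stayed the same size even though /dev/sdc grew, which may explain the no-op:

```shell
#!/bin/sh
# Dry-run sketch of the attempted online expansion.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

# 1. Make the kernel re-read the grown RAID volume (on both nodes).
run sh -c 'echo 1 > /sys/class/scsi_device/1:2:2:0/device/rescan'
# 2. Let DRBD find the relocated internal metadata, then grow the resource.
run drbdmeta /dev/drbd1 v08 /dev/sdc1 internal check-resize
run drbdadm resize vmkfs_vol1
```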
So we attempted to fall back to the offline method documented for DRBD:
Based on> https://www.drbd.org/en/doc/users-guide-83/s-resizing
- Took down the drbd resource and dumped the metadata: 'drbdadm dump-md vmkfs_vol1 >/tmp/metadata'
- Edited /tmp/metadata to change "la-size-sect = DevSizeInSect - ((DevSizeInSect / 32768 * 8) + 72)"
- Used 'LVM_SYSTEM_DIR= drbdadm create-md vmkfs_vol1' to create the new metadata, circumventing the LVM detection problem for the LVM contained within the DRBD resource
- Reimported the metadata to the drbd resource
- Attempted to bring up the drbd resource, but it errors with "Low.dev. smaller than requested DRBD-dev. size."
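As a sanity check on the edited value, the la-size-sect arithmetic from the step above can be computed directly (the function name and the example device size are mine; the formula is as quoted, using integer sector division):

```python
def la_size_sect(dev_size_sect: int) -> int:
    """Usable sectors after DRBD internal metadata, per the quoted formula:
    la-size-sect = DevSizeInSect - ((DevSizeInSect / 32768 * 8) + 72)."""
    meta_sect = (dev_size_sect // 32768) * 8 + 72
    return dev_size_sect - meta_sect

# Illustrative: a 1 TiB backing device is 2147483648 512-byte sectors.
print(la_size_sect(2147483648))  # 2146959288
```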
Looking at parted's output, we still see the free space. In parted I tried the following:
- Switched to sector mode
- Tried resize, but it fails with "unknown filesystem type"
- Tried removing and recreating the partition, but that fails with a "closest location we can manage" message, which only offered to recreate the partition up to a few thousand sectors past its original end
At this point, I assume that parted won't let me create the larger partition because it sees something there, like the DRBD metadata... I guess this is a parted bug/limitation more than anything. The disk has a GPT table because of its size, so fdisk was off the table.
As a last resort, I assume that if I were to low-level wipe the data from the RAID on one node, I could recreate the partition and metadata from scratch. I believe I can do a full resync from the smaller to the larger resource, fail over to the system with the expanded storage, and then wipe/resync/repeat back on the other node.
Does this last resort make sense, and/or has anyone else run across this or found a way to move forward? Also, can I just back DRBD with the device directly and no partition (/dev/sdc instead of /dev/sdc1)?
I'm on drbd v8.3.11
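For reference, backing the resource with the whole device rather than a partition only changes the disk line in the resource definition. A hedged sketch in drbd.conf style; the hostnames, addresses, and minor number below are placeholders, not values from this thread:

```
resource vmkfs_vol1 {
  on node-a {                  # hostname/address are placeholders
    device    /dev/drbd1;
    disk      /dev/sdc;        # whole device, no partition table needed
    address   192.168.1.1:7789;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd1;
    disk      /dev/sdc;
    address   192.168.1.2:7789;
    meta-disk internal;
  }
}
```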
Very much appreciated!
-With kind regards,
Peter Brunnengräber
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user