
List:       linux-lvm
Subject:    Re: [linux-lvm] Sorry to ask here ...
From:       Eric Ren <zren () suse ! com>
Date:       2017-03-02 5:44:13
Message-ID: e7242e4c-537d-a0b5-523c-cfa55d7170cb () suse ! com

Hi,

On 02/24/2017 03:25 AM, Georges Giralt wrote:
> Hello !
>
> I'm sorry to ask for help here, but I'm lost in the middle of nowhere ...
>
> The context :
>
> A PC with a UEFI "BIOS" set to legacy boot only.
>
> 3 disks with 2 partitions each. The first 3 partitions form a software mirror (md0) 
> used for /boot (ext4). The second 3 partitions form another software mirror, and the 
> resulting md1 is the sole PV of a volume group vg0 carrying 8 logical volumes. This is 
> used to run Ubuntu 16.04.2 (after many upgrades...).   *
>
> This setup has worked for years and seen changes in disk sizes, main board, etc...
>
> Lately (I bet the cause is an upgrade of some packages) the machine refuses to boot. It 
> drops into the initramfs shell because the vg0 volume group is not activated. Launching lvm 
> and doing a vgchange -a y does the trick, and the system boots fine afterwards.
>
> I've searched everywhere I can (but maybe the setup was so rock solid that I became lazy or 
> careless) and tried to rebuild the initramfs, with no luck.
>
> So any leads or advice would be greatly appreciated. Please refrain from hitting me hard in 
> the face. I'm already mourning....
>
> Many thanks in advance for your help !
>
>
> * : The 3 disks are here because, at one time long ago, I used LVM's own mirroring. 
> ... I switched to software RAID and kept the 3 disks as I had them in the box.
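
(For reference: the manual workaround described above boils down to roughly the following
at the (initramfs) prompt, with vg0 being the VG name from the report:

  (initramfs) lvm vgchange -a y vg0
  (initramfs) exit

)
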
This looks very similar to these bugs:

https://bugzilla.opensuse.org/show_bug.cgi?id=964862
https://bugzilla.opensuse.org/show_bug.cgi?id=919284
https://bugzilla.opensuse.org/show_bug.cgi?id=1019886

You might be interested in giving this fix (attached) a try:
"simplify-special-case-for-md-in-69-dm-lvm-metadata.patch"

This patch was originally discussed here - https://www.spinics.net/lists/raid/msg55182.html

But for some reason, it was not accepted upstream. :)
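
If you want to test it on the Ubuntu box, one possible route is sketched below. The paths
are assumptions for Ubuntu 16.04, and note that the patch is against the
udev/69-dm-lvm-metad.rules.in template from the lvm2 source tree, so the hunk may need an
offset or a small manual edit when applied to the installed rule:

  # keep a backup of the installed rule
  cp /lib/udev/rules.d/69-dm-lvm-metad.rules /lib/udev/rules.d/69-dm-lvm-metad.rules.orig

  # apply the hunk to the installed copy, naming the target file explicitly
  patch /lib/udev/rules.d/69-dm-lvm-metad.rules \
      < 0001-Simplify-special-case-for-md-in-69-dm-lvm-metadata.r.patch

  # rebuild the initramfs so early boot picks up the modified rule
  update-initramfs -u

Then reboot and see whether vg0 is activated without dropping to the initramfs shell.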

Hope it can help.

Eric



["0001-Simplify-special-case-for-md-in-69-dm-lvm-metadata.r.patch" (text/x-patch)]

From 0913b597d61b9b430654d7ab06528cdfcfaf06f4 Mon Sep 17 00:00:00 2001
From: NeilBrown <neilb@suse.com>
Date: Wed, 4 Jan 2017 14:20:53 +1100
Subject: [PATCH] Simplify special-case for md in 69-dm-lvm-metadata.rules

This special casing brings little value.  It appears to attempt to
determine if the array is active yet or not, and to skip
processing if the array has not yet been started.
However, if the array hasn't been started, then "blkid" will
not have been able to read a signature, so:
  ENV{ID_FS_TYPE}!="LVM2_member|LVM1_member", GOTO="lvm_end"
will have caused all this code to be skipped.

Further, this code causes incorrect behaviour in at least one case.
It assumes that the first "add" event should be ignored, as it will be
followed by a "change" event which indicates the array coming on line.
This is consistent with how the kernel sends events, but not always
consistent with how this script sees events.
Specifically: if the initrd has "mdadm" support installed, but not
"lvm2" support, then the initial "add" and "change" events will
happen while the initrd is in charge and this file is not available.
Once the root filesystem is mounted, this file will be available
and "udevadm trigger --action=add" will be run.
So the first and only event seen by this script for an md device will be
"add", and it will incorrectly ignore it.

So replace the special handling with code that simply jumps to lvm_scan
on any 'add' or 'change' event.

Signed-off-by: NeilBrown <neilb@suse.com>
---
 udev/69-dm-lvm-metad.rules.in | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/udev/69-dm-lvm-metad.rules.in b/udev/69-dm-lvm-metad.rules.in
index bd75fc8efcd5..fcbb7f755eba 100644
--- a/udev/69-dm-lvm-metad.rules.in
+++ b/udev/69-dm-lvm-metad.rules.in
@@ -51,13 +51,11 @@ ENV{DM_UDEV_PRIMARY_SOURCE_FLAG}=="1", ENV{DM_ACTIVATION}=="1", GOTO="lvm_scan"
 GOTO="lvm_end"
 
 # MD device:
+# Need to scan on both 'add' and 'change'
 LABEL="next"
 KERNEL!="md[0-9]*", GOTO="next"
-IMPORT{db}="LVM_MD_PV_ACTIVATED"
-ACTION=="add", ENV{LVM_MD_PV_ACTIVATED}=="1", GOTO="lvm_scan"
-ACTION=="change", ENV{LVM_MD_PV_ACTIVATED}!="1", TEST=="md/array_state", ENV{LVM_MD_PV_ACTIVATED}="1", GOTO="lvm_scan"
-ACTION=="add", KERNEL=="md[0-9]*p[0-9]*", GOTO="lvm_scan"
-ENV{LVM_MD_PV_ACTIVATED}!="1", ENV{SYSTEMD_READY}="0"
+ACTION=="add", GOTO="lvm_scan"
+ACTION=="change", GOTO="lvm_scan"
 GOTO="lvm_end"
 
 # Loop device:
-- 
2.11.0
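
A quick way to check what udev has recorded for the array, e.g. whether the LVM2_member
signature and the old LVM_MD_PV_ACTIVATED flag were ever seen (assuming the PV is /dev/md1,
as in the report above):

  udevadm info --query=all --name=/dev/md1 | grep -E 'ID_FS_TYPE|LVM_MD_PV_ACTIVATED|SYSTEMD_READY'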



_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

