List:       fedora-list
Subject:    Re: Raid array empty after restart
From:       Bill Shirley <bill () ShirleyFamily ! net>
Date:       2020-05-27 12:48:31
Message-ID: 51979ec7-db1a-5d69-dc12-57c09b38bb28 () ShirleyFamily ! net


I've never done full-disk RAID1; I've always done it with partitions.

fdisk -l /dev/sda (and /dev/sdb) looks like this:
Disklabel type: gpt
Device         Start        End    Sectors  Size Type
/dev/sda1       2048   97722367   97720320 46.6G Linux RAID
/dev/sda2   97722368   99809279    2086912 1019M Linux RAID
/dev/sda3   99809280  101040127    1230848  601M Linux RAID
/dev/sda4  101040128 3907028991 3805988864  1.8T Linux RAID
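
If you're creating the partitions from scratch, a minimal sketch with sgdisk (sizes are
illustrative, not my exact layout above; FD00 is sgdisk's type code for Linux RAID):
# partition the first disk
sgdisk -n 1:0:+47G  -t 1:FD00 /dev/sda
sgdisk -n 2:0:+1G   -t 2:FD00 /dev/sda
sgdisk -n 3:0:+600M -t 3:FD00 /dev/sda
sgdisk -n 4:0:0     -t 4:FD00 /dev/sda
# replicate the table to the second disk, then give it its own GUIDs
sgdisk -R /dev/sdb /dev/sda
sgdisk -G /dev/sdb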


mdadm:
# -l       = level
# -n       = raid-devices
# -e       = metadata
# -b       = bitmap
mdadm -C /dev/md127 --homehost=myserver.example.com -n 2 -l 1 -e 1.2 -b internal \
/dev/sda1 /dev/sdb1

cat /proc/mdstat:
Personalities : [raid1]
md127 : active raid1 sdb2[1] sda2[0]
            1042432 blocks super 1.2 [2/2] [UU]
            bitmap: 0/1 pages [0KB], 65536KB chunk
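
If you want to watch the initial resync (optional; as noted below, you don't have to wait
for it), the kernel exposes the progress:
# refresh the status every few seconds
watch -n 5 cat /proc/mdstat
# or read the md sysfs files directly
cat /sys/block/md127/md/sync_action
cat /sys/block/md127/md/sync_completed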

mdadm --detail /dev/md127:
UUID : e00525a0:a3a5bfc8:ebe4587b:8489d910

mdadm.conf (use the UUID from mdadm --detail):
ARRAY /dev/md/boot level=raid1 num-devices=2 UUID=e00525a0:a3a5bfc8:ebe4587b:8489d910
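
You can let mdadm generate that ARRAY line for you, and on Fedora it's worth rebuilding
the initramfs afterwards so the array is assembled the same way at every boot (a sketch;
adjust the path if your mdadm.conf lives elsewhere):
# append the scanned ARRAY line(s) to the config
mdadm --detail --scan >> /etc/mdadm.conf
# rebuild the initramfs for the running kernel so it picks up mdadm.conf
dracut -f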

You don't need to wait for the sync to complete.
Format the array:
$ mkfs.xfs -L BOOT /dev/md127

# get uuid
$ xfs_admin -u /dev/md127
UUID = 9385d42a-d661-494c-ba1d-b3cad4420b77

# get label
$ xfs_admin -l /dev/md127
label = "BOOT"

fstab (use the UUID from xfs_admin -u):
# device                                    mount point  type  options   dump  fsck
#LABEL=BOOT /dev/md127
UUID=9385d42a-d661-494c-ba1d-b3cad4420b77   /boot        xfs   defaults  0     0
Instead of the UUID, you can use /dev/md127.
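
After adding the entry, a quick sanity check that fstab parses and the mount works
(standard commands, nothing md-specific):
# mount everything listed in fstab that isn't mounted yet
mount -a
# confirm what ended up mounted on /boot
findmnt /boot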

df:
Filesystem   Size  Used Avail Use% Mounted on
/dev/md125   1.8T  279G  1.5T  16% /
/dev/md127  1013M  317M  696M  32% /boot
/dev/md21    931G   40G  892G   5% /ssd
/dev/md124   600M  8.5M  592M   2% /boot/efi
/dev/md32    9.1T  1.5T  7.6T  17% /lan
/dev/md42    9.1T  399G  8.7T   5% /bacula


Hope this helps,
Bill

On 5/24/2020 7:37 AM, Patrick O'Callaghan wrote:
> Still getting the hang of md. I had it working for several days (2
> disks in RAID1 config) but after a system update and reboot, it
> suddenly shows no data:
> 
> ]# lsblk
> NAME                            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> [...]
> sdd                               8:48   0 931.5G  0 disk
> └─md127                           9:127  0 931.4G  0 raid1
>   └─md127p1                     259:0    0 931.4G  0 part
> sde                               8:64   0 931.5G  0 disk
> └─md127                           9:127  0 931.4G  0 raid1
>   └─md127p1                     259:0    0 931.4G  0 part
> 
> # mdadm --detail /dev/md127p1
> /dev/md127p1:
>            Version : 1.2
>      Creation Time : Wed May 20 16:34:58 2020
>         Raid Level : raid1
>         Array Size : 976628736 (931.39 GiB 1000.07 GB)
>      Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
>       Raid Devices : 2
>      Total Devices : 2
>        Persistence : Superblock is persistent
> 
>      Intent Bitmap : Internal
> 
>        Update Time : Sun May 24 12:29:54 2020
>              State : clean
>     Active Devices : 2
>    Working Devices : 2
>     Failed Devices : 0
>      Spare Devices : 0
> 
> Consistency Policy : bitmap
> 
>               Name : Bree:0  (local to host Bree)
>               UUID : ba979f01:7f1dbe79:24f19f68:7ba6000c
>             Events : 22436
> 
>     Number   Major   Minor   RaidDevice State
>        0       8       48        0      active sync   /dev/sdd
>        1       8       64        1      active sync   /dev/sde
> # mount /dev/md127p1 /raid
> # ls /raid
> 
> How is this possible? The only thing that touches the array is a borg
> backup run from crontab, which I have verified is working correctly,
> including just before the update and reboot this morning. It looks as
> if the mount is mounting the wrong thing.
> 
> Or am I missing something very obvious?
> 
> poc


_______________________________________________
users mailing list -- users@lists.fedoraproject.org
To unsubscribe send an email to users-leave@lists.fedoraproject.org
Fedora Code of Conduct: https://docs.fedoraproject.org/en-US/project/code-of-conduct/
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines
List Archives: https://lists.fedoraproject.org/archives/list/users@lists.fedoraproject.org

