
List:       evms-devel
Subject:    Re: [Evms-devel] Problems with Raid5 after upgrading to 2.4.0
From:       Mike Tran <mhtran@us.ibm.com>
Date:       2004-10-18 18:44:38
Message-ID: 1098125078.5234.23.camel@MIKETRAN.austin.ibm.com

On Mon, 2004-10-18 at 13:08, Dominik Westner wrote:
> Hi Mike,
> 
> thanks for pointing me to mdadm. That really helped me reassemble the 
> raid. It's currently running degraded, but recovering the 3rd disk.
> I tried to assemble the raid from all 3 devices, but that did not work 
> with mdadm. When I used just 2 of them, it worked.

You will have to wait for the resync to complete (i.e., until /proc/mdstat shows the rebuild has finished).
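For example, to keep an eye on the rebuild you can run something like the
following (/dev/md1 is taken from your output below; the 60-second interval
is just an example):

    cat /proc/mdstat
    watch -n 60 cat /proc/mdstat
    mdadm --detail /dev/md1 | grep -i rebuild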

> 
> 
> I still don't know how this could happen. I made sure that the resync 
> finished before rebooting. (There was actually a weekend in between.)
> 
> My raid5 setup contains 3 raid devices:
> /dev/hda4
> /dev/hde
> /dev/hdg

My guess is that you had an I/O error on /dev/hdg and it was kicked out
by the kernel md driver (1 faulty disk).  When you re-added the same
disk to the array, it got kicked out again.  That's why you ended
up with 2 faulty disks.
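
If you want to check that theory, the kernel log and the drive's SMART
data are the first places I would look, e.g. (assuming smartmontools is
installed; the exact log file depends on your syslog setup):

    dmesg | grep -i hdg
    grep -i hdg /var/log/messages
    smartctl -a /dev/hdg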


> 
> Here are the details:
> 
> loki ~ # mdadm --detail /dev/md1
> /dev/md1:
>          Version : 00.90.01
>    Creation Time : Sat Mar  8 16:39:42 2003
>       Raid Level : raid5
>       Array Size : 78164992 (74.54 GiB 80.04 GB)
>      Device Size : 39082496 (37.27 GiB 40.02 GB)
>     Raid Devices : 3
>    Total Devices : 3
> Preferred Minor : 1
>      Persistence : Superblock is persistent
> 
>      Update Time : Mon Oct 18 20:47:51 2004
>            State : clean, degraded, recovering
>   Active Devices : 2
> Working Devices : 3
>   Failed Devices : 0
>    Spare Devices : 1
> 
>           Layout : left-symmetric
>       Chunk Size : 256K
> 
>   Rebuild Status : 1% complete
> 
>             UUID : 84695c0c:bb0c5b89:35b23d2d:105763b6
>           Events : 0.6656504
> 
>      Number   Major   Minor   RaidDevice State
>         0      33        0        0      active sync   /dev/hde
>         1       0        0        -      removed
>         2     253        3        2      active sync   /dev/evms/.nodes/hda4
> 
>         3      34        0        1      spare rebuilding   /dev/hdg
> 
> 
> Thanks again. Without your help I'd have been lost; I have never used 
> anything other than evms for my raid configuration.

OK. The raid5 array is resyncing.  Check /proc/mdstat for /dev/md1
again to make sure that everything is OK.
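Once the rebuild finishes, the md1 line in /proc/mdstat should list all
three members and end in something like "[3/3] [UUU]" rather than
"[3/2] [U_U]" (illustrative only; the exact devices shown will differ).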

> 
> I have one final question though. Can I use mdadm in parallel to evms? 
> (e.g. for monitoring).

You can use mdadm in conjunction with EVMS.  mdadm is a command-driven
user interface, so you have to know what you are doing.  I have used
mdadm many times to re-create MD arrays, especially raid5 arrays.
When I use mdadm, I know exactly how many disks there are, the chunk
size, the order of the disks, which one is missing (if any), etc.
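
For monitoring, mdadm's monitor mode only reads the array state, so it is
safe to run alongside EVMS.  A minimal sketch (the mail address and polling
interval are only placeholders):

    mdadm --monitor --scan --mail=root@localhost --delay=300

And to give an idea of what re-creating an array looks like, here is
roughly the command for the array above (raid5, 3 devices, 256K chunks,
left-symmetric, device order as in the --detail output, with the failed
slot given as "missing").  This is only a sketch; re-creating with the
wrong device order or chunk size over live data will destroy it:

    mdadm --create /dev/md1 --level=5 --raid-devices=3 --chunk=256 \
          --layout=left-symmetric /dev/hde missing /dev/evms/.nodes/hda4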

--
Regards,
Mike T.



