List:       ext3-users
Subject:    Re: Corrupt inodes on shared disk...
From:       "Paul Fitzmaurice" <pfitzmaurice () aveksa ! com>
Date:       2007-04-04 3:34:17
Message-ID: 8D5E04C06376C240844AA8FF7C81CBAD037536 () exchange ! aveksa ! local

Thanks for the info. If you could help confirm: it appears that in some fail-over
situations we are mounting the shared partition while the node going down has not
yet completely shut down and unmounted it!

So with one node still holding the partition read-write while it shuts down, and
the other node mounting it as it starts up... could this cause inode and journal
corruption?


----- Original Message -----
From: Stephen Samuel <darkonc@gmail.com>
To: Paul Fitzmaurice
Cc: ext3-users@redhat.com <ext3-users@redhat.com>
Sent: Tue Apr 03 15:40:03 2007
Subject: Re: Corrupt inodes on shared disk...

I don't know much about RHCS, but I think this is more likely
to be a Red Hat problem than an ext3 problem.

1) *IF* RHCS properly locks out the 'dead' system, so that it doesn't
manage (at some time after the backup system takes over) to write
cached data to the shared drive,

2) and *IF* the failover software isn't too stupid to do things like
replay the journal and otherwise do sane fsck things before mounting
(see the sketch below), then you shouldn't have a problem.
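
Something along these lines, run before the mount, is what I mean by "sane
fsck things". This is only a sketch -- /dev/sdb1 and /mnt/shared are made-up
names, not your actual shared LUN or the real RHCS service script:

    # Hypothetical device and mount point -- substitute your shared LUN.
    DEV=/dev/sdb1
    MNT=/mnt/shared

    # e2fsck -p replays the ext3 journal and fixes trivial problems.
    # Exit codes 0 and 1 mean clean (or safely corrected); anything
    # higher means real damage, so refuse to mount.
    e2fsck -p "$DEV"
    rc=$?
    if [ "$rc" -gt 1 ]; then
        echo "e2fsck on $DEV returned $rc -- not mounting" >&2
        exit 1
    fi
    mount -t ext3 "$DEV" "$MNT"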

My best guess is that 2) is relatively unlikely, which leaves 1) as the
probable cause.

If your primary system does *ANY* writes after the failover starts,
then you can probably expect problems like you've seen here. (does
RHCS _physically_ lock out the second system, or is it a software
lockout?)
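
One way to check -- again only a sketch, and your fence device names will
differ -- is to look at which fence agents cluster.conf configures. Power and
IPMI agents (fence_apc, fence_ipmilan, ...) physically power the failed node
off, while fence_manual is only a software/operator acknowledgement, so the
"dead" node can keep writing to the shared LUN while it finishes dying:

    # Show the per-node fencing blocks RHCS is configured with.
    grep -i -A2 '<fence' /etc/cluster/cluster.conf

    # List just the fence agents in use.
    grep -o 'agent="[^"]*"' /etc/cluster/cluster.conf | sort -u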

The other question I have is: why is the system failing over?  Other
than testing, a well-built HA system should almost *never* actually
need to fail over. (we're not talking Windows servers here :-} )  HA
should be like insurance ... you pay up front for it and work to make
sure that you never actually have to use what you've paid for.


On 4/3/07, Paul Fitzmaurice <pfitzmaurice@aveksa.com> wrote:
> I am having problems when using a Dell PowerVault MD3000 with multipath from
> a Dell PowerEdge 1950.  I have 2 cables connected and mount the partition on
> the DAS Array.  I am using RHEL 4.4 with RHCS and a two node cluster.  Only
> one node is "Active" at a time, it creates a mount to the partition, and if
> there is an issue RHCS will fence the device and then the other node will
> mount the partition.
> 
> I have now run into a problem twice where my ext3 (with Journaling) has
> corrupt inodes.  This actually has resulted in a filesystem with #xxxxxxxxx
> files and directories.


_______________________________________________
Ext3-users mailing list
Ext3-users@redhat.com
https://www.redhat.com/mailman/listinfo/ext3-users