
List:       veritas-ha
Subject:    Re: [Veritas-ha] I/O Fencing - Number of Coordinator disks from more than one array
From:       "Andry" <andry@optimacomputer.com>
Date:       2008-11-15 14:42:38
Message-ID: WYyJmjwxmlwM.wh3QJMF9@smtp.klub-mentari.com

You can add one small shared disk, such as a MultiPack, but I don't know whether it supports SCSI-3 or just SCSI-2.
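
A quick way to confirm is the vxfentsthdw utility that ships with VCS: it
exercises SCSI-3 persistent reservations against a disk from both nodes. A
minimal sketch, assuming VCS 5.0 default paths and a test LUN whose contents
you can afford to lose (the test writes to the disk; exact prompts and
options vary by release):

  # run from one node; the utility coordinates the test with the second node
  /opt/VRTSvcs/vxfen/bin/vxfentsthdw
  # when prompted, supply both node names and the raw device path on each,
  # e.g. /dev/rdsk/c2t0d11s2

If the disk passes, it honours SCSI-3 PR; a SCSI-2-only device will fail the
reservation steps.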
Thanks and Regards,

Andry Hartawan
PT Optima Bisnis Integra
Authorized Sun and Symantec Education Center
Rasuna Office Park LO-O9
Jalan Rasuna Said
Jakarta 12960
T. (62-21) 83786480-81 | F. 83786470
E. andry@optimacomputer.com
YM. optimacomputer
Skype. andry.hartawan 

- original message -
Subject:	Re: [Veritas-ha] I/O Fencing - Number of Coordinator disks from more than one array
From:	"John Cronin" <jsc3iii@gmail.com>
Date:		14/11/2008 8:49 pm

I was thinking something similar - be prepared to use different coordinator
disks if the array with a majority of your disks goes down - I wonder if you
can configure "hot spares" in the coordinator disk group?

On Fri, Nov 14, 2008 at 1:20 PM, Rich Whiffen <rwhiffen@gmail.com> wrote:

> That's the page 148 I was mentioning. If no one tries to leave or join the
> cluster, everything is fine.  If a node does leave the cluster, all the
> nodes will go down because none of them will be able to obtain the simple
> majority (they'll basically take themselves out of the running because they
> know they can't win).   If you look at the Veritas Cluster Server
> Installation Guide 5.0 pages 144 to 148 they have a very helpful table on
> what happens in the various scenarios. So long as the heartbeat links are up
> and the nodes handle the loss of the array gracefully it shouldn't be a
> problem.  But if there is a problem, you won't be able to come up because
> you have no coordinator disks.  You could pre-allocate another set of three
> from the other EVA but not use them and then put them into service on an
> emergency basis to get the cluster back up.  Since you'd know all the disk
> info ahead of time, you could make a 'run book' of what would need to happen
> in that case in line-by-line detail.
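>
> A rough sketch of what that run book might contain, assuming the cluster is
> already down and the three spare LUNs on the surviving EVA are labelled and
> visible to both nodes (the disk access names are placeholders):
>
>   # on one node, build a replacement coordinator disk group
>   vxdg init vxfencoorddg eva2_d1 eva2_d2 eva2_d3
>   vxdg -g vxfencoorddg set coordinator=on
>   vxdg deport vxfencoorddg
>
>   # on each node: point fencing at the group (no change if the name is
>   # reused), restart fencing, then start VCS
>   echo "vxfencoorddg" > /etc/vxfendg
>   /etc/init.d/vxfen start    # or the SMF equivalent on Solaris 10
>   hastart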
>
> Rich
>
>
>
> On Fri, Nov 14, 2008 at 1:06 PM, <ketan.patel@uk.nomura.com> wrote:
>
>>  Thanks Rich/John for your suggestions, that information is valuable.
>> Option A I mentioned is ruled out. Option C is dodgy but should work. Any
>> idea about option B – how will the cluster behave when the array with all
>> 3 coordinator disks itself goes down? Will both nodes panic and crash?
>>
>>
>>
>> Ketan
>>
>>
>>
>> *From:* Rich Whiffen [mailto:rwhiffen@gmail.com]
>> *Sent:* 14 November 2008 17:08
>> *To:* Patel, Ketan (IT/UK)
>> *Cc:* veritas-ha@mailman.eng.auburn.edu
>> *Subject:* Re: [Veritas-ha] I/O Fencing - Number of Coordinator disks
>> from more than one array
>>
>>
>>
>> I don't have a good answer on the coordinator disk issue.   Keep in mind
>> the more disks you have, the more rounds of roshambo your servers could end
>> up having to play before coming up (in this case it's the South Park
>> version, not the Rock-Paper-Scissors version).  So the smallest number of
>> disks (3 generally) is preferable.  In all three of your examples there is
>> a case where you can't come up because a majority cannot be obtained
>> without both arrays being online (a racing node has to register with a
>> majority of the coordinator disks to win, so once the array holding most
>> of them is gone, no node can reach that majority). Check out pages 144 -
>> 148 of the VCS 5.0 setup guide:
>> http://support.veritas.com/docs/283868 (page 148 in particular).  I don't
>> think there's a perfect answer for you.
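>>
>> To see what fencing is actually using at any point, these are handy
>> (output varies a bit by release):
>>
>>   vxfenadm -d      # fencing mode and current cluster membership
>>   vxfenconfig -l   # coordinator disks currently in use by vxfen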
>>
>>
>> As for the HCL, I'd start here (there are links to a bunch of relevant
>> docs on this page):
>> http://seer.entsupport.symantec.com/docs/307438.htm
>>
>> The specific document you want, I believe, is:
>> http://seer.entsupport.symantec.com/docs/283161.htm
>>
>> It says supported for Active/Active with Fencing, with one footnote: the
>> ASL must be installed:
>> http://seer.entsupport.symantec.com/docs/300657.htm
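>>
>> A quick way to confirm the ASL took effect once it's installed (enclosure
>> names in the output will vary with your setup):
>>
>>   # list the array support libraries VxVM currently recognises
>>   vxddladm listsupport all
>>
>>   # confirm the EVA LUNs are claimed under the expected enclosure
>>   vxdmpadm listenclosure all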
>>
>> Rich
>>
>> On Fri, Nov 14, 2008 at 11:07 AM, <ketan.patel@uk.nomura.com> wrote:
>>
>> Greetings!
>>
>> I need some advice about setting up coordinator disks. I searched the
>> mailing list archive but couldn't find an exact answer, hence I'm posting
>> the question here.
>>
>> I am setting up a new 2 node cluster – Sol 10, VCS 5.0, VM 5.0, storage
>> from 2 x HP EVA 8100s. As per the standard setup, we present similar LUNs
>> from both EVAs and mirror them on the host using VM. I am fine with the
>> configuration of the data disks. However, I am not sure about the
>> coordinator disks for I/O fencing.
>>
>> The guide says I should have 3 coordinator disks. Which way should I
>> allocate the coordinator disks in my case?
>>
>> Option a – 3 disks from one EVA and 4 disks from the other EVA (they can't
>> be mirrored using VM because no volumes are created on them), total 7 disks
>> in vxfenddg – an odd number of LUNs, as required
>>
>> Option b – 3 disks from only one EVA, total 3 disks in vxfenddg
>>
>> Option c – 1 disk from one EVA and 2 disks from the other EVA, total 3
>> disks in vxfenddg
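>>
>> For reference, whichever layout is picked, I'd expect the coordinator disk
>> group setup to look roughly the same. A sketch for option c (the disk
>> access names are placeholders, and the disks would need to pass the
>> fencing compliance tests first):
>>
>>   vxdg init vxfenddg eva1_d1 eva2_d1 eva2_d2
>>   vxdg -g vxfenddg set coordinator=on
>>   vxdg deport vxfenddg
>>   echo "vxfenddg" > /etc/vxfendg    # tells vxfen which dg to use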
>>
>> Even before I begin, is there a way to confirm whether the HP EVA 8100s
>> are SCSI3-PR compatible? Is there an HCL for Sol 10 / VCS 5?
>>
>> I hope someone on the mailing list with a setup similar to ours will be
>> able to give some valuable advice.
>>
>> Thanks,
>>
>> Ketan
>>
>>
>>
>>
>> --
>> Rich Whiffen
>> rich@whiffen.org
>> http://rich.whiffen.org
>>
>>
>>
>
>
>
> --
> Rich Whiffen
> rich@whiffen.org
> http://rich.whiffen.org
>
>
>





_______________________________________________
Veritas-ha maillist  -  Veritas-ha@mailman.eng.auburn.edu
http://mailman.eng.auburn.edu/mailman/listinfo/veritas-ha
