List: toasters
Subject: Re: RAC install on NetApp
From: tmacmd@gmail.com
Date: 2008-03-21 16:30:53
Message-ID: 179510826-1206117051-cardhu_decombobulator_blackberry.rim.net-639211888-@bxe005.bisx.prod.on.blackberry
http://www.specbench.org
Remember to look at the "real" numbers. In other words, look at how many
filesystems are used: 1 in the case of the 24-node GX cluster, 2 in the
case of most if not all NetApp clusters, 1 in nearly all the rest of the
NetApp submissions, while everybody else usually uses an
obscene/unrealistic number of filesystems.

The one real flaw I see in this benchmark is that it distributes I/O
evenly across all filesystems.

Sent from my Verizon Wireless BlackBerry
-----Original Message-----
From: "Robert K. Borowicz" <rob-7704@austin.rr.com>
Date: Fri, 21 Mar 2008 10:32:05
To: toasters@mathworks.com
Subject: Re: RAC install on NetApp
Toast{ed} types,
What is the benchmark site or reference paper I should look at for NFS
perf numbers for filers?
The below scenario is getting interesting in that the NFS clients (RAC
hosts) are Sun T2000's with known on-board GigE driver issues. We are
patching as I type...

As a brainless test this morning I took *off* the "noac" mount setting
and got 4-5X the performance compared to with it.
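For context, the mounts in question look roughly like this (filer and
volume names below are made up, and the option set is the usual
Oracle-on-Solaris NFS recipe, so treat it as a sketch rather than our
exact config):

  # Datafile volume: Oracle wants attribute caching off (noac) so RAC
  # instances see consistent file attributes, but noac is exactly what
  # murders metadata-heavy work like software installs.
  mount -F nfs -o rw,bg,hard,nointr,proto=tcp,vers=3,rsize=32768,wsize=32768,forcedirectio,noac \
      filer1:/vol/oradata /u02/oradata

  # Binaries/install area: leave attribute caching on, otherwise every
  # stat() during the install goes over the wire.
  mount -F nfs -o rw,bg,hard,nointr,proto=tcp,vers=3,rsize=32768,wsize=32768 \
      filer1:/vol/orahome /u01/app/oracle

The rule of thumb being: noac belongs on the datafile mounts, not on
the mount holding ORACLE_HOME.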
I see myself going down that fabled path of "well, if it's a 3020 with
GigE and Cat 6509 infra and no jumbo frames, you *should* get X perf."
It would be good to compare against some conservative and reasonable
numbers somebody else already wrote down.
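As a back-of-envelope sanity check (my arithmetic, not a published
number): GigE is 1000 Mbit/s, or 125 MB/s of raw signalling. Knock off
Ethernet/IP/TCP/RPC overhead and a single non-jumbo link realistically
tops out somewhere around 90-110 MB/s of NFS payload for large
sequential I/O. If one T2000 client is seeing a small fraction of
that, suspect the client side (the GigE driver, noac) before the 3020.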
And no EMC guys allowed to play here.
;-)
TIA as always,
-Rob
>
>
> -----Original Message-----
> From: Robert K. Borowicz [mailto:rob-7704@austin.rr.com]
> Sent: Wednesday, March 19, 2008 10:32 AM Pacific Standard Time
> To: toasters@mathworks.com
> Subject: RAC install on NetApp
>
> Toasters,
>
> My DBA just came to me and complained that an Oracle RAC 10g build took
> 18 hours when using a Filer NFS mount. Seems I've heard of this issue
> before but forget the fix. Has anyone seen this before?
>
> TIA
>
> -Rob
>