
List:       opensolaris-storage-discuss
Subject:    Re: [storage-discuss] Update to 6140 firmware for PSARC 2007/171:
From:       Albert Chin <opensolaris-storage-discuss () mlists ! thewrittenword ! com>
Date:       2007-07-10 17:35:12
Message-ID: 20070710173511.GB53978 () mail1 ! thewrittenword ! com

On Tue, Jul 10, 2007 at 12:43:13PM -0400, Jonathan Edwards wrote:
> 
> On Jul 10, 2007, at 12:01, Albert Chin wrote:
> 
> > Dunno if this is the correct forum but any plans to update the 6140
> > firmware to allow specifying how much of the 6140 NVRAM can be
> > dedicated to each LUN? This would allow dedicating some of the NVRAM
> > to the ZIL and the remaining to the existing LUNs on the system.
> 
> ...
> 
> much of zfs design is currently aimed at offloading caching
> strategies back onto the host, and simply offloading the zil onto a
> dedicated cache friendly device may not necessarily yield the types
> of gains that some are expecting to see .. the real benefit that i
> see here is extending the cache strategies on the host side and
> possibly the future integration of the filesystem directly onto the
> host controller - that seems like a more lucrative endeavor to me.

Once b68 is released, we plan to run tests with the ZIL disabled, a
RAM-backed ZIL using ramdiskadm(1m), and an NVRAM-backed ZIL using the
NVRAM on the 6140.
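
For reference, roughly the commands involved in each configuration
(the pool name, ramdisk size, and 6140 LUN device name below are just
placeholders):

  # 1. ZIL disabled: the old tunable, set in /etc/system, then reboot
  set zfs:zil_disable = 1

  # 2. RAM-backed ZIL: create a ramdisk and add it as a separate log device
  ramdiskadm -a zil0 1g
  zpool add tank log /dev/ramdisk/zil0

  # 3. NVRAM-backed ZIL: add a small cache-enabled 6140 LUN as the log device
  zpool add tank log c4t600A0B800011223344d0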

Early on, we did see a noticeable difference with ZIL disabled. So,
I'm thinking a RAM-backed ZIL should move us closer to that number.

> > Currently, if one wants ZIL in NVRAM, the 6140 NVRAM must be disabled
> > on all LUNs except for the LUN being used for the ZIL, where one can
> > dedicate all the NVRAM.
> 
> right - i've been tracking it .. with the 6140 though it seems like
> it would be a waste since for your normal data LUNs you might be
> better off with non-RAID devices if you're planning on using raidz
> or any of the self healing pieces .. so you're really paying a lot
> of money for an engenio raid ctlr you're not going to use that
> much, and for cache that you're attempting to throw at the zil to
> offload the design inefficiencies there in the current incarnation.

Well, we already bought the 6140 so ... :)

And, we are using non-RAID devices. We're just using the 6140 as a
JBOD since we want RAID-Z2.
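
In other words, each disk is exported as its own LUN and the pool is a
single RAID-Z2 vdev, along the lines of (device names are placeholders):

  zpool create tank raidz2 c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0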

Hopefully one day Sun will come out with a better hardware combo using
ZFS to compete with NetApp.

> now if you do create RAID LUNs for the non zil devices and choose
> not to use the array cache - your performance there may be terrible
> if you're not very well block aligned (the controller does a decent
> job of coalescing IO in cache to get more efficient full stripe
> commits to and from the drives with the write cache turned on), and
> your R/M/W numbers would probably reflect this - given the fact
> that zfs blocksizes are dynamic (up to 128K) based on the
> application workload and the size or frequency of the txg commit ..
> you may find that different applications will perform quite
> differently here.

Yeah, I thought about this. That's why I kinda want to leave some
NVRAM for the LUNs. But that's not an option.
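
One host-side knob we can still play with is the dataset recordsize,
capping it to keep the largest writes near the LUN stripe width (it
only limits the block size, it doesn't force full-stripe commits).
The filesystem name below is a placeholder:

  zfs set recordsize=128k tank/data
  zfs get recordsize tank/data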

-- 
albert chin (china@thewrittenword.com)
_______________________________________________
storage-discuss mailing list
storage-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
