
List:       npaci-rocks-discussion
Subject:    Re: [Rocks-Discuss] Multiple Drives for Home Directories
From:       David Black-Schaffer <davidbbs () stanford ! edu>
Date:       2007-05-29 16:57:03
Message-ID: 85C7715C-654D-4E49-AC2E-741B4D1EF125 () stanford ! edu

Mike,
	I'm going to second the earlier comments that this sounds like a bad
idea. You would be much better off taking those four 160GB drives,
putting them in one machine as a RAID 10 (or RAID 5) array, and using
that machine as a dedicated file server. The setup you've outlined
might perform better if you're not saturating your network switch
(unlikely) and the load is evenly distributed across the home
directories (very unlikely), but not otherwise.
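	For concreteness, here's roughly what that would look like on Linux
with software RAID (mdadm). This is only a sketch; the device names,
filesystem, and export subnet are placeholders you'd adapt to your
hardware and your Rocks private network:

    # Build a RAID 10 from the four drives (device names are examples):
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext3 /dev/md0
    mkdir -p /export/home
    mount /dev/md0 /export/home

    # Export it to the compute nodes; add a line like this to
    # /etc/exports (substitute your cluster's private subnet):
    #   /export/home 10.1.0.0/255.255.0.0(rw,sync)
    exportfs -ra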

	With the setup you've outlined, you'll have no redundancy for your
user data. 160GB+ drives are cheap, and the cost of recovering the
data when they fail will easily outweigh the cost of putting in a
decent RAID setup. (And they will fail.)

	My current setup is a 10-disk RAID 50 with 2 hot spares in a 3U box
with a 3ware SATA RAID card. This has worked great for us (one drive
failed and it automatically rebuilt the array with the hot spare,
allowing me to replace the failed drive seamlessly), but there are
two things I wish I could work around:

	The first is that NFS under Linux does a terrible job of balancing
load. If one user requests a lot of I/O and saturates the switch or
the disks (it's easy to saturate gigabit Ethernet with 10 drives),
all other users experience file system stalls for many seconds until
their I/O makes it into the queue. I wish the NFS server would do a
better job of interleaving requests from different users and
assigning them a fair priority. (If anyone knows how to fix this I'd
love to hear from them!) This is one benefit of getting a NetApp box,
I believe.
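	One knob that's at least worth experimenting with is the number of
nfsd threads. This is not a fix (it doesn't give per-user fairness),
but more server threads can reduce the head-of-line blocking when one
client floods the server. The thread count below is just an
illustrative guess; tune it for your load:

    # Show the current thread count and how often all threads were busy:
    grep th /proc/net/rpc/nfsd

    # Raise the thread count on the running server (e.g. to 32):
    rpc.nfsd 32

    # On Red Hat-style systems, make it persistent across reboots:
    #   echo 'RPCNFSDCOUNT=32' >> /etc/sysconfig/nfs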

	The second problem I have is that there is no easy way to increase
the capacity. We currently have 12 200GB drives, and when I want to
replace them with 1TB drives I'm going to have to move all the data
off the array, put in the new drives, and then move it all back. I
wish I had set up this machine as a Solaris box with ZFS, as this
would then be trivial. If you're building an array, I would highly
recommend considering (and testing) how well Solaris would integrate
with Rocks. ZFS has a lot of compelling features, and I know at least
one person who uses it regularly and has only good things to say
about it. (I can easily mount the NFS export from his ZFS array on my
Rocks cluster and it works fine, but I've only done limited testing.)
Since it's all through NFS, I suspect it would work just fine with
Rocks, but you'd want to test it first.
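	To give a flavor of why ZFS is appealing here, the whole
create/share/grow cycle is a handful of commands. This is a sketch
with made-up pool, device, and host names, not a tested recipe:

    # Create a pool with double-parity redundancy (raidz2) and share it:
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zfs create tank/home
    zfs set sharenfs=rw tank/home

    # Growing capacity later is one command: add another raidz2 vdev.
    zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

    # From a Rocks node, the mount is plain NFS:
    #   mount -t nfs zfsserver:/tank/home /mnt/home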

	-David



> The cluster has 64 nodes; 4 of these nodes have a 160GB /dev/hda hard
> drive. The rest have smaller hard drives.
>
> The owner of the cluster would like to use 1 of the 4 as the head node,
> and the other 3 (with larger hard drives) as NFS-mounted file systems
> for users' home directories.

