
List:       gpfsug-discuss
Subject:    [gpfsug-discuss] GPFS 4.2 / Protocol nodes
From:       Robert.Oesterlin () nuance ! com (Oesterlin, Robert)
Date:       2016-01-12 13:40:27
Message-ID: F022061C-E1B4-476F-A2CD-E9267CABB1D7 () nuance ! com

My experience is that in small clusters, it's acceptable to double up on some of
these services, but as you scale up, breaking them out makes more sense. Spectrum
Scale makes it easy to add/remove the nodes non-disruptively, so you can move them to
dedicated nodes later. When I first started testing 4.2, I set up a 6-node cluster that had
both NSD and CES on the same nodes, and it did just fine. The nodes were 4-core/32GB and I had
NFS and Object running on the CES nodes. The new CES nodes run a ton more services,
especially when you start working with Object.
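
For reference, moving the protocol role between nodes is just a case of enabling CES
on the new dedicated nodes and disabling it on the old shared ones - roughly something
like this (the node names proto01 and nsd01 are made up):

  # Enable CES on the new dedicated protocol node
  mmchnode --ces-enable -N proto01

  # Check that protocol services are running where expected
  mmces service list -a
  mmces node list

  # Once the CES addresses have failed over, drop CES from the old combined NSD/CES node
  mmchnode --ces-disable -N nsd01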

Both of your points are valid considerations, especially with CNFS. I'm running
multiple CNFS clusters and having them broken out has saved me a number of times.

Bob Oesterlin
Sr Storage Engineer, Nuance HPC Grid



From: gpfsug-discuss-bounces@spectrumscale.org on behalf of Daniel Kidger <daniel.kidger at uk.ibm.com>
Reply-To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: Monday, January 11, 2016 at 8:42 AM
To: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Cc: "gpfsug-discuss at spectrumscale.org" <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] GPFS 4.2 / Protocol nodes
Subject: Re: [gpfsug-discuss] GPFS 4.2 / Protocol nodes

It looks like no one has attempted to answer this, so I will step in to start the
conversation.

There are two issues when considering how many services to run on the same nodes - in
this case the NSD servers.

1. Performance.
Spectrum Scale's (nee GPFS) core differentiator is performance. The more you run on a
node, the more that node's resources have to be shared. Here memory bandwidth and
memory space are the main ones; CPU may also be a limited resource, although with
modern chips that is less likely. If performance is not the key delivered metric,
then running other things on the NSD servers may be a good option to save both cost
and server sprawl in small datacentres.
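
As a sketch of the memory point: the GPFS pagepool is pinned memory on every node, so
on a combined NSD/protocol node you would normally cap it explicitly rather than let
it compete with Ganesha and the Object services. For example (the 8G figure is only
an illustration, not a recommendation):

  # Show the current pagepool setting
  mmlsconfig pagepool

  # Cap the pagepool on the CES nodes (built-in cesNodes node class); takes effect on restart
  mmchconfig pagepool=8G -N cesNodes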

2. NFS server stability.
Pre-4.1.1, IBM used cNFS to provide multiple NFS servers in a GPFS cluster. This used
the traditional kernel-based NFS daemons: if one hung, the whole node had to be
rebooted, which could disrupt NSD serving if the other NSD server of a pair was
already under load. With 4.1.1 came Cluster Export Services (CES), delivered from
'Protocol Nodes'. Since these use Ganesha, all NFS activity is in userspace rather
than the kernel, so there is no need to reboot the node if NFS serving hangs.
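
To make that concrete: with CES, a hung or misbehaving NFS service can be bounced on
a single protocol node without rebooting it, something like the following (the node
name proto01 is made up):

  # See which CES services are running on which nodes
  mmces service list -a

  # Restart only the NFS (Ganesha) service on one protocol node - no reboot required
  mmces service stop NFS -N proto01
  mmces service start NFS -N proto01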

