
List:       gpfsug-discuss
Subject:    Re: [gpfsug-discuss] Importing a Spectrum Scale filesystem from 4.2.3 cluster to 5.0.4.3 cluster
From:       Chris Scott <chrisjscott@gmail.com>
Date:       2020-06-02 13:31:05
Message-ID: CAKFPaRhZiT_3iO7imGSJ9fX6oz98+evyJfa8YqOgVLMwsuNXLA@mail.gmail.com


Hi Fred

The imported filesystem has ~1.5M files that are migrated to Spectrum
Protect. Spot checks of transparent and selective recalls on a handful of
files have been successful after associating them with their correct
Spectrum Protect server. The files are also all backed up to primary and
copy pools on the Spectrum Protect server, so having to restore instead of
recall if recalls hadn't worked was an acceptable risk compared with trying
to keep the GPFS 3.5 cluster alive on dying hardware, an insecure OS, etc.
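
For anyone wanting to repeat the spot checks, a minimal sketch (assuming
the Spectrum Protect for Space Management client is installed; the file
paths are hypothetical):

```shell
# Show the migration state of a file (m = migrated, p = premigrated, r = resident)
dsmls /gpfs/fs1/archive/file001.dat

# Selective recall of a specific file
dsmrecall /gpfs/fs1/archive/file001.dat

# Transparent recall: any ordinary read of a stub triggers it via DMAPI
md5sum /gpfs/fs1/archive/file002.dat

# Confirm the file is now resident or premigrated
dsmls /gpfs/fs1/archive/file002.dat
```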

Cheers
Chris

On Mon, 1 Jun 2020 at 17:53, Frederick Stock <stockf@us.ibm.com> wrote:

> Chris, it was not clear to me if the file system you imported had files
> migrated to Spectrum Protect, that is, stub files in GPFS. If the file
> system does contain files migrated to Spectrum Protect with just a stub
> file in the file system, have you tried to recall any of them to see if
> that still works?
>
> Fred
> __________________________________________________
> Fred Stock | IBM Pittsburgh Lab | 720-430-8821
> stockf@us.ibm.com
>
>
>
> ----- Original message -----
> From: Chris Scott <chrisjscott@gmail.com>
> Sent by: gpfsug-discuss-bounces@spectrumscale.org
> To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
> Cc:
> Subject: [EXTERNAL] Re: [gpfsug-discuss] Importing a Spectrum Scale
> filesystem from 4.2.3 cluster to 5.0.4.3 cluster
> Date: Mon, Jun 1, 2020 9:14 AM
>
> Sounds like it would work fine.
>
> I recently exported a 3.5-version filesystem from a GPFS 3.5 cluster to a
> 'Scale cluster at 5.0.2.3 software and 5.0.2.0 cluster version. I
> concurrently mapped the NSDs to new NSD servers in the 'Scale cluster,
> mmexported the filesystem, and changed the NSD server configuration of the
> NSDs using the mmimportfs ChangeSpecFile. The original (creation)
> filesystem version of this filesystem is 3.2.1.5.
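
For reference, the sequence looks roughly like this (device, file and server
names are hypothetical; check the mmexportfs/mmimportfs man pages for the
exact options on your release):

```shell
# On the old cluster: unmount everywhere, then export; mmexportfs
# writes the filesystem and NSD configuration to a file
mmumount fs1 -a
mmexportfs fs1 -o /tmp/fs1.exportfile

# Prepare a change-spec file reassigning the NSDs to the new cluster's
# NSD servers, using the usual NSD stanza format, e.g.:
#   %nsd: nsd=nsd001 servers=newnsd1,newnsd2

# On the new cluster: import using the export file and the change-spec file
mmimportfs fs1 -i /tmp/fs1.exportfile -S /tmp/fs1.changespec
mmmount fs1 -a
```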
>
> To my pleasant surprise the filesystem mounted and worked fine while still
> at 3.5 filesystem version. Plan B would have been to "mmchfs <filesystem>
> -V full" and then mmmount, but I was able to update the filesystem to
> 5.0.2.0 version while already mounted.
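
The Plan B upgrade step itself is short (filesystem name hypothetical):

```shell
# Check the current filesystem format version
mmlsfs fs1 -V

# Raise the filesystem format to the latest version the cluster supports;
# note this is one-way and back-level nodes can no longer mount it
mmchfs fs1 -V full
```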
>
> This was a further pleasant success, as the filesystem in question is
> DMAPI-enabled, with the majority of the data on tape under Spectrum Protect
> for Space Management rather than resident/pre-migrated on disk.
>
> The complexity was further compounded by this filesystem being associated
> with a different Spectrum Protect server than an existing DMAPI-enabled
> filesystem in the 'Scale cluster. Preparation of configs and the subsequent
> commands to enable and use Spectrum Protect for Space Management
> multiserver for migration and backup all worked smoothly, as per the docs.
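
As a rough illustration of the multiserver config side (server names and
addresses are hypothetical; the authoritative steps are in the Space
Management multiserver documentation), dsm.sys ends up with one stanza per
Spectrum Protect server, and each HSM-managed filesystem is then associated
with its server per the docs:

```
* dsm.sys: one SERVERNAME stanza per Spectrum Protect server
SErvername protect1
   COMMMethod         TCPip
   TCPServeraddress   protect1.example.com
   TCPPort            1500
   PASSWORDACCESS     generate

SErvername protect2
   COMMMethod         TCPip
   TCPServeraddress   protect2.example.com
   TCPPort            1500
   PASSWORDACCESS     generate
```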
>
> I was thus able to get rid of the GPFS 3.5 cluster on legacy hardware, OS,
> GPFS and homebrew CTDB SMB and NFS and retain the filesystem with its
> majority of tape-stored data on current hardware, OS and 'Scale/'Protect
> with CES SMB and NFS.
>
> The future objective remains to move all the data from this historical
> filesystem to a newer one to get the benefits of larger block and inode
> sizes, etc. Since the data is mostly dormant and kept for
> compliance/best-practice purposes, though, the main goal will be to head
> off the original 3.2-era filesystem version going end of support.
>
> Cheers
> Chris
>
> On Thu, 28 May 2020 at 23:31, Prasad Surampudi <
> prasad.surampudi@theatsgroup.com> wrote:
>
> We have two Scale clusters: Cluster-A running Spectrum Scale 4.2.3 on
> RHEL 6/7, and Cluster-B running Spectrum Scale 5.0.4 on RHEL 8.1. All the
> nodes in both Cluster-A and Cluster-B are direct-attached; there are no NSD
> servers. We have our current filesystem gpfs_4 in Cluster-A and a new
> filesystem gpfs_5 in Cluster-B. We want to copy all our data from the gpfs_4
> filesystem into gpfs_5, which has variable block size. So, can we map the
> NSDs of gpfs_4 to Cluster-B nodes, do an mmexportfs of gpfs_4 from
> Cluster-A, and mmimportfs it into Cluster-B so that we have both filesystems
> available on the same node in Cluster-B for copying data across fibre
> channel? If mmexportfs/mmimportfs works, can we delete nodes from Cluster-A
> and add them to Cluster-B without upgrading RHEL or GPFS for now, and plan
> to upgrade them at a later time?
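
If the import works, the copy itself can then be done node-locally with
ordinary tools, e.g. (mount points hypothetical):

```shell
# Both filesystems mounted on the same Cluster-B node
mmmount gpfs_4 -a
mmmount gpfs_5 -a

# Preserve hard links, ACLs and extended attributes during the copy
rsync -aHAX --numeric-ids /gpfs_4/ /gpfs_5/
```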
> _______________________________________________
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
