
List:       npaci-rocks-discussion
Subject:    [Rocks-Discuss] Re: rocks7 install can I restore the rocks 6 users
From:       "Cooper, Trevor" <tcooper () sdsc ! edu>
Date:       2019-06-30 17:55:57
Message-ID: BYAPR04MB420012EC1D9574F465C3264BD7FE0@BYAPR04MB4200.namprd04.prod.outlook.com

So I'm guessing here, but I think you must have changed .local to .puppet when setting up your private network in the installer.

If you don't specifically remember doing it later, then the installer is likely where it happened.

In any case, if you've made changes to your NFS exports and/or autofs configuration and things are now working, then that is a red herring.

Your cluster frontend is responsible for DNS for the 'private' network (named .puppet). That is why SERVEDNS is set to True on that network.

You probably don't want to change the setting for your public network because DNS is likely handled by your campus IT.

I don't see any errors in the rocks sync users output you provided.

You could try creating a new test user to verify that a new home directory is created and that the changes are propagated to the nodes.

If that is all working you should be good to go.
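For example, a minimal sketch (run as root on the frontend; 'testuser' is a placeholder name, and I'm assuming the stock Rocks useradd defaults that put home directories under /export/home):

# useradd testuser       # account created, home under /export/home/testuser
# passwd testuser        # set an initial password
# rocks sync users       # push passwd/group/shadow to the nodes via 411
# su - testuser          # confirm the home directory automounts on the frontend

Then log into a compute node and check that 'id testuser' resolves and that /home/testuser mounts there as well.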

Trevor Cooper, M.Sc.
HPC Systems Programmer
San Diego Supercomputer Center
________________________________
From: npaci-rocks-discussion-bounces@sdsc.edu <npaci-rocks-discussion-bounces@sdsc.edu> on behalf of Robert Kudyba <rkudyba@fordham.edu>
Sent: Friday, June 28, 2019 7:54 PM
To: Discussion of Rocks Clusters
Subject: [Rocks-Discuss] Re: rocks7 install can I restore the rocks 6 users

> This part...
> 
> mount | grep addrita
> /dev/mapper/rocks_puppet-root on /home/addrita type ext4
> (rw,relatime,data=ordered)
> 
> Is -nfsvers=3 necessary?
> 
> 
> ...looks like you ran it on the frontend and not a compute node. It
> matters.
> 
> Please don't remove the full prompt as it's the only way I can tell where
> you're running commands if you don't add that as a comment.
> 


I didn't remove anything; I guess it's because, as you say, I ran it on the frontend. Here are the results of the command from the one working node:

mount | grep addrita

puppet.puppet:/export/home/addrita on /home/addrita type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.1.1.1,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.1.1.1)


> 
> Out of curiosity... did you rename your private (i.e. .local) network to
> .puppet or did you name your frontend puppet.puppet when you installed it?
> 

I don't recall doing that and was wondering where that happened. Perhaps
during the Anaconda install from the DVD.


> What is the output of...
> 
> $ hostname -f
> 
> ...on your frontend and also...
> 

hostname -f
puppet.cis.fordham.edu



> $ rocks list network
> 
> ...on your frontend?
> 

rocks list network
NETWORK  SUBNET    NETMASK       MTU   DNSZONE         SERVEDNS
private: 10.1.1.0  255.255.255.0 1500  puppet          True
public:  10.10.5.0 255.255.255.0 1500  cis.fordham.edu False



And I'd actually like DNS on the public side.
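If campus IT isn't already answering for that zone, something like this should flip it (a sketch only; I'm recalling the 'rocks set network servedns' syntax from memory, so verify it on your frontend first):

# rocks set network servedns public true
# rocks sync config      # regenerate frontend config from the database (I believe this includes named's)

Be careful, though: serving cis.fordham.edu from the frontend will conflict with campus DNS if it already answers for that zone.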


> I can't think of a reason why but I wonder if that is causing some kind of
> weird behavior looking up hosts...
> 

I'm not tied to it, that's for sure; just let me know the best way to set it. I see this in /etc/hosts:

10.1.1.1 puppet.puppet puppet

I've been trying different things and found this thread:
https://lists.sdsc.edu/pipermail/npaci-rocks-discussion/2018-March/071676.html

So in /etc/auto.share I put:

apps puppet.puppet:/state/partition1/&


I restarted autofs and NFS, and the apps and user directories are available.
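For reference, on the CentOS 7 base that Rocks 7 uses, those restarts are roughly the following (the systemd unit names are the stock CentOS 7 ones and are an assumption on my part):

# systemctl restart autofs        # reread /etc/auto.share
# systemctl restart nfs-server    # pick up any export changes
# ls /share/apps                  # trigger the automount to confirm it works

/share is the standard mount point Rocks uses for the auto.share map.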


However, rocks sync users still errors with:

Size: 21847/15871 bytes (encrypted/plain)
/opt/rocks/sbin/411put --nocomment /etc/group
Event '411Alert' dispatched! Coalescing enabled: false
411 Wrote: /etc/411.d/etc.group
Size: 14054/10102 bytes (encrypted/plain)
/opt/rocks/sbin/411put --nocomment /etc/shadow
Event '411Alert' dispatched! Coalescing enabled: false
411 Wrote: /etc/411.d/etc.shadow
Size: 27046/19715 bytes (encrypted/plain)
make[1]: Leaving directory `/var/411'
make: Leaving directory `/var/411'
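
To double-check that 411 actually delivered those files to the nodes, something like this should work ('compute-0-0' is a placeholder host name):

$ rocks run host compute-0-0 "411get --all"     # list the files 411 knows about on that node
$ rocks run host compute-0-0 "id someuser"      # confirm an account resolves there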


And I'm 99.9% sure all users, i.e., those that have a home directory, have UIDs > 1000.
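
A quick one-liner to verify that from /etc/passwd:

$ awk -F: '$3 >= 1000 {print $1, $3}' /etc/passwd

That prints the name and UID of every account at or above the normal-user threshold.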

