
List:       linux-nfs
Subject:    Re: [NFS] Statd problem
From:       Beschorner Daniel <Daniel.Beschorner@facton.com>
Date:       2005-03-30 9:43:28
Message-ID: BB1F4EFD574E9F409035BDA71C961250023B08@exchange.i-bn

Hi Kiran,

sm.bak is empty on both the server and the clients.
I will post tomorrow whether 2.6.10 makes a difference.

Thank you
Daniel


----------
Hi Daniel,
 That was a typo.
 Remove any files from sm.bak.
 This problem existed on Solaris 8 as well.
 Hopefully, removing these files will solve the problem.
regards,
 kiran
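
A rough sketch of the cleanup described above, assuming the usual nfs-utils
state directory /var/lib/nfs and a SysV init script named "nfs" (both names
vary by distribution):

  # stop NFS so statd is not running while its state files are changed
  /etc/init.d/nfs stop
  # sm/ holds hosts statd currently monitors, sm.bak/ holds hosts still
  # to be notified after a reboot; clear any stale entries in both
  rm -f /var/lib/nfs/sm/* /var/lib/nfs/sm.bak/*
  /etc/init.d/nfs start
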
--- Beschorner Daniel <Daniel.Beschorner@facton.com>
wrote:
> The AIX clients' sm directories contain only an empty file named after the
> server.
> rpcinfo -p shows nlockmgr versions 1/3/4 running on the server over both
> TCP and UDP.
> nfs-utils is 1.0.7.
> 
> I had a similar problem (same messages) in the past with kernel 2.2 and
> lockd zombies.
> Everything is working fine, but the messages only started in March, so I
> assume something changed between 2.6.10 and 2.6.11?!
> 
> For the easiest answer to this question, I will roll back to 2.6.10 and
> see.
> 
> Thanks
> Daniel
> 
> 
> 
> ----------
> Hi Daniel,
>     lockd has RPC service number 100021. This message
> means that statd is not able to talk to lockd.
> Are there any files on the client side in
> /var/lib/nfs/sm when it restarts?
> 
> Try deleting all these files before NFS restarts on
> the client.
> 
> Also check that lockd is registered with the
> portmapper (use the command rpcinfo -p).
> 
> regards,
>  --kiran
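
(A quick way to run the check kiran suggests; program 100021 is nlockmgr,
i.e. lockd, and the exact versions listed may vary:)

  # verify that lockd (nlockmgr, program 100021) is registered
  rpcinfo -p | grep 100021
  # statd itself registers as "status", program 100024; check it too,
  # since the reboot notification and callback go through statd
  rpcinfo -p | grep 100024
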
> 
> 
> 
> --- Beschorner Daniel <Daniel.Beschorner@facton.com>
> wrote:
> > Since the first reboot of our clients (mainly AIX) after we updated our
> > NFS server from 2.6.10 to 2.6.11, we sporadically get these server log
> > entries, I think after each client reboot:
> > 
> > Mar 29 12:44:31 server rpc.statd[3201]: recv_rply: [127.0.0.1] RPC status 1
> > Mar 29 12:44:49 server last message repeated 3 times
> > Mar 29 12:44:55 server rpc.statd[3201]: Can't callback server (100021,4), giving up.
> > 
> > What do they mean? No further problems are seen.
> > Mounts are NFSv3 over TCP.
> > 
> > Thanks
> > Daniel