
List:       gluster-users
Subject:    [Gluster-users] Memory leak with glusterfs NFS on 3.2.6
From:       Rajesh Amaravathi <rajesh at redhat.com>
Date:       2012-06-26 13:18:24
Message-ID: b53b3601-14b0-4a75-b050-1feebb483d62 at zmail03.collab.prod.int.phx2.redhat.com

I found some memory leaks in the 3.2 release which, over time, add up to a lot of
leakage, but they are fixed in 3.3. We will fix them in 3.2 too, but the best
option, IMO, would be to upgrade to 3.3. Please let us know if you find any leaks in 3.3 too.



Regards, 
Rajesh Amaravathi, 
Software Engineer, GlusterFS 
RedHat Inc. 
----- Original Message -----

From: "Philip Poten" <philip.poten at gmail.com> 
To: "Rajesh Amaravathi" <rajesh at redhat.com> 
Cc: gluster-users at gluster.org 
Sent: Thursday, June 21, 2012 1:03:53 PM 
Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6 

Hi Rajesh, 

We are handling only small files up to 10MB, mainly in the 5-250kB range - in short,
images in a flat structure of directories. Since there is a Varnish setup facing the
internet, my guess would be that reads and writes are somewhat balanced, i.e. not in
excessive relation to each other, but still considerably more reads than writes.

Files are almost never truncated, altered or deleted. I'm not sure if the backend
writes resized images by creating and renaming them on gluster or by moving them onto
gluster.

The munin graph looks as if the memory consumption grows faster during heavy usage. 

"gluster volume top operations" returns with the usage help, so I can't help you with \
that. 
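
(A possible reason, judging from the 3.2-era CLI and not verified here: "top" seems to
expect a volume name plus a specific metric rather than an "operations" keyword, roughly
along these lines - the volume name "images" and the exact subcommands are assumptions:

    # hypothetical invocations; the usage output it printed should show the exact syntax
    gluster volume top images read list-cnt 10
    gluster volume top images write list-cnt 10
)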


Options Reconfigured: 

performance.quick-read: off 
performance.cache-size: 64MB 
performance.io-thread-count: 64 
performance.io-cache: on 
performance.stat-prefetch: on 
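
For reference, a sketch of how options like these are normally applied and checked with
the gluster CLI (the volume name "images" below is just a placeholder):

    # one "volume set" invocation per option
    gluster volume set images performance.quick-read off
    gluster volume set images performance.cache-size 64MB
    gluster volume set images performance.io-thread-count 64
    gluster volume set images performance.io-cache on
    gluster volume set images performance.stat-prefetch on
    # verify: the values appear under "Options Reconfigured"
    gluster volume info images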


I would gladly deploy a patched 3.2.6 deb package for better debugging, or help you
with any other measure that doesn't require us to take it offline for more than a
minute.
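
One such low-impact measure, assuming the 3.2-era statedump mechanism (details not
verified here): sending SIGUSR1 to the glusterfs NFS server process should make it
write a statedump without interrupting service, roughly:

    # locate the gluster NFS server process (the match pattern is an assumption)
    NFS_PID=$(pgrep -f 'glusterfs.*nfs')
    # SIGUSR1 asks the process to dump its state; it keeps serving requests
    kill -USR1 "$NFS_PID"
    # on 3.2.x the dump is expected to land in /tmp as glusterdump.<pid>
    ls -l /tmp/glusterdump."$NFS_PID"*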


thanks for looking into that! 


kind regards, 
Philip 


2012/6/21 Rajesh Amaravathi < rajesh at redhat.com > 
> 
> Hi all, 
> I am looking into this issue, but could not make much from the statedumps. 
> I will try to reproduce this issue. If I know what kind of operations (reads,
> writes, metadata r/ws, etc.) are being done, and whether there are any other
> configuration changes w.r.t. GlusterFS, it'll be of great help.
> Regards, 
> Rajesh Amaravathi, 
> Software Engineer, GlusterFS 
> RedHat Inc. 
> ________________________________ 
> From: "Xavier Normand" < xavier.normand at gmail.com > 
> To: "Philip Poten" < philip.poten at gmail.com > 
> Cc: gluster-users at gluster.org 
> Sent: Tuesday, June 12, 2012 6:32:41 PM 
> Subject: Re: [Gluster-users] Memory leak with glusterfs NFS on 3.2.6 
> 
> 
> Hi Philip, 
> 
> I do have about the same problem that you describe. There is my setup: 
> 
> Gluster: Two bricks running gluster 3.2.6 
> 
> Clients: 
> 4 clients running native gluster fuse client. 
> 2 clients running nfs client 
> 
> My NFS clients are not doing that much traffic, but after a couple of days I was
> able to see that the brick used to mount the NFS export is having memory issues.
> I can provide more info as needed to help correct the problem.
> 
> Thanks,
> 
> Xavier 
> 
> 
> 
> On 2012-06-12 at 08:18, Philip Poten wrote:
> 
> 2012/6/12 Dan Bretherton < d.a.bretherton at reading.ac.uk >: 
> 
> I wonder if this memory leak is the cause of the NFS performance degradation I
> reported in April.
> 
> 
> That's probable, since the performance does go down for us too when
> the glusterfs process reaches a large percentage of RAM. My initial
> guess was that it's the file system cache being squeezed out,
> thus increasing iowait. But a closer look at our munin graphs implies
> that it's also the user space that eats more and more CPU
> proportionally with RAM:
> 
> http://imgur.com/a/8YfhQ 
> 
> There are two restarts of the whole gluster process family visible on 
> those graphs: one a week ago at the very beginning (white in the 
> memory graph, as munin couldn't fork all it needed), and one 
> yesterday. The drop between 8 and 9 was due to a problem unrelated to
> gluster. 
> 
> Pranith: I just made one dump; tomorrow I'll make one more and mail
> them both to you so that you can compare them. Although I only restarted
> yesterday, the leak should already be visible, as the process grows a few
> hundred MB every day.
> 
> thanks for the fast reply, 
> Philip 
> _______________________________________________ 
> Gluster-users mailing list 
> Gluster-users at gluster.org 
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users 
> 
> 
> 



