
List:       kde-pim
Subject:    Re: [Kde-pim] Review Request: Fix ETM memory leak: m_Items hash table was never cleaned up
From:       "Milian Wolff" <mail@milianw.de>
Date:       2012-10-16 22:21:08
Message-ID: <20121016222108.17392.2822@vidsolbach.de>



> On Oct. 16, 2012, 5:40 p.m., Alex Fiestas wrote:
> > I have applied the patch and modified MAXBUFFERS to 1 (expecting to get only 1 folder
> > cached).
> > Then loaded in this order:
> > Inbox (KMail using 70Mb, folder has around 2K emails)
> > KDE-Commits (KMail going up to 800Mb, folder has roughly 200K emails)
> > Inbox
> > Akademy (this folder has around 20 messages)
> > 
> > KMail stayed at 860Mb of RAM.

Don't you have favorite folders? Were they all loaded? Can you compare the memory
consumption (without changing MAXBUFFERS) before and after applying this patch while
skimming through the same set of folders? Also, please try to track whether the memory
consumption keeps increasing after you've loaded one of these massive folders once, or
whether it levels out.

Also note how my two memory profiles compare: while one levels out and stays much lower
than the other, it doesn't go down either. So far I can only explain that through memory
fragmentation... or other unexpected issues which were not apparent in Massif.

To sum up: I think this patch is an improvement, yet there might be more places of
undesired implicit sharing and thus "leaking".
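
To illustrate what I mean by that: implicit sharing can keep a large allocation alive
through a single surviving handle, which then looks like a leak in memory graphs. A
minimal, made-up example (sizes and names are for illustration only):

    #include <QByteArray>

    int main()
    {
        QByteArray payload(100 * 1024 * 1024, 'x'); // 100 MB buffer
        QByteArray copy = payload;  // implicitly shared, no deep copy
        payload.clear();            // one handle gone...
        // ...but the whole 100 MB block stays allocated for as long
        // as 'copy' still references the shared data.
        return copy.isEmpty() ? 1 : 0;
    }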


- Milian


-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
http://git.reviewboard.kde.org/r/106832/#review20460
-----------------------------------------------------------


On Oct. 15, 2012, 12:13 p.m., Milian Wolff wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> http://git.reviewboard.kde.org/r/106832/
> -----------------------------------------------------------
> 
> (Updated Oct. 15, 2012, 12:13 p.m.)
> 
> 
> Review request for KDEPIM, KDEPIM-Libraries and Stephen Kelly.
> 
> 
> Description
> -------
> 
> As it turned out, the ETM kept filling its QHash<Item::Id, Item> m_items without ever
> cleaning it up. This resulted in increasing memory consumption of KMail until eventually
> all collections were fetched at least once by the ETM. Note: it's not a true leak, i.e.
> the data is still reachable and gets properly cleaned up on close. But during runtime the
> ever increasing memory consumption is easily noticeable.
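> 
> To illustrate the pattern with a toy model (not the actual ETM code): a cache that only
> ever inserts stays fully reachable, so leak checkers report nothing, yet memory grows
> until every key has been seen once:
> 
>     #include <QHash>
>     #include <QByteArray>
> 
>     int main()
>     {
>         QHash<qint64, QByteArray> items; // stand-in for ETM's m_items
>         for (qint64 id = 0; id < 200000; ++id) {
>             items.insert(id, QByteArray(1024, 'x')); // a "fetched item"
>             // there is no matching remove() anywhere, so consumption
>             // only ever goes up while the application runs
>         }
>         return 0; // everything is freed here, hence "not a true leak"
>     }
> 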
> Generally, this patch ensures that the existing cache code actually works. Maybe the old
> code worked once, and a regression crept in with the patch that moved some of that code
> from ETMPrivate to MonitorPrivate::PurgeBuffer?
> Anyhow, this patch basically consists of these parts:
> 
> a) Ensure that the SelectionProxyModel properly derefs root indices in its dtor. This
> should be a no-brainer; compare with what the ctor does. If this is not done, collections
> will always stay reffed and thus never get cleaned up.
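> 
> Schematically, the symmetry this restores looks like the following (RefCounter and
> SelectionProxy are made-up stand-ins for the real classes, just to show the shape):
> 
>     #include <QHash>
> 
>     struct RefCounter
>     {
>         QHash<qint64, int> refs;
>         void ref(qint64 id)   { ++refs[id]; }
>         void deref(qint64 id) { if (--refs[id] <= 0) refs.remove(id); }
>     };
> 
>     class SelectionProxy
>     {
>     public:
>         SelectionProxy(RefCounter *c, qint64 rootId)
>             : m_c(c), m_rootId(rootId)
>         { m_c->ref(m_rootId); }   // the ctor refs the root index
> 
>         ~SelectionProxy()
>         { m_c->deref(m_rootId); } // without the matching deref, the
>                                   // collection stays reffed forever
>     private:
>         RefCounter *m_c;
>         qint64 m_rootId;
>     };
> 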
> b) Fix the implementation of the PurgeBuffer. The old code was, as far as I could see,
> non-functional: the safety calls to ::purge from ::buffer, for example, purged much more
> than just the one requested ID from the buffer. This in turn resulted in ::buffer never
> returning anything other than -1.
> Instead, the code is now simplified by using a QQueue as FIFO. Considering that this
> change was mostly based on how I thought the buffer *should* behave, it might need some
> reviewing; I didn't find the documentation sufficient. The buffer, as I've written it
> now, works as follows (see the sketch after this list):
> PurgeBuffer::purge(id) -> just remove id from the FIFO
> PurgeBuffer::buffer(id) ->
> - remove id if it is already tracked (i.e. purge(id))
> - ensure the FIFO is not bigger than X items; if it is, dequeue and purge the first item
> - enqueue id to the FIFO
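> 
> A minimal sketch of that behaviour (the names follow my description above; treat it as
> an illustration, not the final code):
> 
>     #include <QQueue>
> 
>     class PurgeBuffer
>     {
>     public:
>         // Just remove id from the FIFO.
>         void purge(qint64 id)
>         {
>             m_buffer.removeOne(id);
>         }
> 
>         // Buffer id; returns the id that fell out of the FIFO and
>         // should now be purged, or -1 if nothing needs purging.
>         qint64 buffer(qint64 id)
>         {
>             m_buffer.removeOne(id); // re-buffering moves id to the back
> 
>             qint64 purgedId = -1;
>             if (m_buffer.size() >= MAXBUFFERSIZE) {
>                 purgedId = m_buffer.dequeue(); // oldest entry drops out
>             }
>             m_buffer.enqueue(id);
>             return purgedId;
>         }
> 
>     private:
>         static const int MAXBUFFERSIZE = 10;
>         QQueue<qint64> m_buffer;
>     };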
> 
> c) Fix the implementation of ETMPrivate::removeItems and ::purgeItems such that they do
> not keep iterators around which get invalidated once we call .erase on the parent
> container. That would yield double-deletion runtime errors and the like.
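> 
> The safe pattern looks roughly like this (a sketch; shouldPurge() and the value type are
> placeholders for the actual purge condition and item type):
> 
>     #include <QHash>
> 
>     typedef QHash<qint64, int> ItemHash; // stand-in for m_items
> 
>     static bool shouldPurge(qint64 id) { return id % 2 == 0; } // placeholder
> 
>     static void removeItems(ItemHash &items)
>     {
>         ItemHash::iterator it = items.begin();
>         while (it != items.end()) {
>             if (shouldPurge(it.key())) {
>                 // erase() invalidates 'it' but returns the next valid
>                 // iterator, so always continue from its return value
>                 it = items.erase(it);
>             } else {
>                 ++it;
>             }
>         }
>     }
> 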
> d) Fix the memory leak by actually removing items from m_items in ::removeItems. This
> requires us to also remove the purged collection id from m_populatedCols.
> Note: This patch should be the basis for evaluating the caching parameters
> (MAXITEMS=10000 and MAXBUFFERSIZE=10). In particular, I think we should investigate
> whether MAXITEMS should take precedence over MAXBUFFERSIZE, such that e.g. opening ten
> 10k folders does not result in 100k items in memory, but only in 10k items (see the
> sketch below).
> Note: Favorited folders are always reffed and thus never cleared from the cache.
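> 
> A hypothetical sketch of that precedence idea (all names are illustrative, none of this
> is existing Akonadi API):
> 
>     #include <QQueue>
>     #include <QHash>
> 
>     struct ItemCache
>     {
>         QQueue<qint64> bufferedCols;     // FIFO of buffered collections
>         QHash<qint64, int> colItemCount; // collection id -> item count
>         int totalItems;
> 
>         void enforceItemCap(int maxItems)
>         {
>             // Evict whole collections, oldest first, until the total
>             // number of cached items is back under the global cap.
>             while (totalItems > maxItems && !bufferedCols.isEmpty()) {
>                 const qint64 victim = bufferedCols.dequeue();
>                 totalItems -= colItemCount.take(victim);
>             }
>         }
>     };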
> 
> 
> Diffs
> -----
> 
> akonadi/entitytreemodel_p.cpp bc2ca28 
> akonadi/monitor_p.h b5d0984 
> akonadi/monitor_p.cpp b2e85eb 
> akonadi/selectionproxymodel.cpp 72ca71a 
> 
> Diff: http://git.reviewboard.kde.org/r/106832/diff/
> 
> 
> Testing
> -------
> 
> Ran it against my local setup. The memory consumption now stays roughly constant over
> time when reading all my folders, instead of increasing until all collections have been
> seen.
> To reproduce: open KMail and go from one collection to the next until you have visited
> all of them. Note the memory consumption intermittently. Before: memory consumption
> increases until the end. After: memory consumption reaches a peak and then goes up and
> down as you open your collections.
> 
> Screenshots
> -----------
> 
> memory consumption before patch
> http://git.reviewboard.kde.org/r/106832/s/774/
> memory consumption after patch
> http://git.reviewboard.kde.org/r/106832/s/775/
> 
> 
> Thanks,
> 
> Milian Wolff
> 
> 

_______________________________________________
KDE PIM mailing list kde-pim@kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/


