List: kde-pim
Subject: Re: [Kde-pim] Problem with bulk fetching of items with 4.8
From: Kevin Krammer <kevin.krammer () gmx ! at>
Date: 2012-02-10 8:01:25
Message-ID: 201202100901.30193.kevin.krammer () gmx ! at
Hi Shaheed,
On Saturday, 2012-02-04, Shaheed Haque wrote:
> Hi Kevin,
>
> See below...
>
> 2012/2/4 Kevin Krammer <kevin.krammer@gmx.at>:
> > Hi Shaheed,
> >
> > I am not entirely sure what problem you are trying to solve.
> >
> > My interpretation is that Exchange does not allow you to query for items
> > that have been created/modified/deleted since the last sync.
>
> Correct.
>
> > Since that would mean there is no such thing as the concept "remote
> > revision", why would you want to store one?
> >
> > Without the option of just getting the updates you will have two set of
> > items, one from Akonadi and one from Exchange.
> >
> > Any item in E but not in A is new.
> > Any item in E and in A is potentially modified.
> > Any item not in E but in A has been deleted
>
> Also correct.
>
> > But as I said, I think I don't understand the problem.
>
> The piece you are missing is the amount of data and the speed with
> which it can be fetched. On my system, I can fetch about 500 items
> every 50 seconds, and there are about 500k items to fetch, so a full
> download takes ~50k seconds, or about 14 hours. Both the number of
> items and the download time mean that I cannot realistically do the
> usual thing of building two lists for E and A and subtracting one
> from the other.
I see, that indeed changes things.
> Instead, I have this design in mind...
>
> 1. When I start fetching the collection, I will note the starting time
> using a collection attribute to persist the information (in case of
> needing to restart the machine).
You could alternatively use Collection::remoteRevision.
> 2. I have an incremental fetch phase during which I fetch data in
> batches (of 500 items). After each batch, I "bookmark" where I got to.
> If I shutdown my machine, on restart, I resume the fetch using the
> bookmark.
>
> 3. When I get to the end (I've never actually managed to get to that
> point yet!), I hope to delete all the items with a creation date prior
> to the recorded start time.
>
> I hope that makes sense? Anyway, it is the query for this last part
> that I am stuck on - or some other better idea!
The only alternative I can come up with is updating each processed Akonadi
item's remote revision so that all non-updated ones remain at the old one.
I.e.

    foreach (item from exchange) {
        fetch the corresponding Akonadi item
        depending on the result, either create or modify it
        use the new timestamp as the item's remote revision
    }
Once the full sync is done, fetch all Akonadi items again and delete all that
still have the original timestamp.
Cheers,
Kevin
--
Kevin Krammer, KDE developer, xdg-utils developer
KDE user support, developer mentoring
_______________________________________________
KDE PIM mailing list kde-pim@kde.org
https://mail.kde.org/mailman/listinfo/kde-pim
KDE PIM home page at http://pim.kde.org/