On 8/21/07, Dirk Mueller <mueller@kde.org> wrote:
> On Tuesday, 21. August 2007, Flavio Castelli wrote:
>
> > We watch all indexed directories recursively. This way Strigi's index
> > will stay up to date and user searches will be consistent (we won't
> > return deleted or no-longer-valid content, nor omit valid content).
>
> As I tried to express previously: that's an extremely stupid idea. Strigi
> should not interfere while my "svn update" touches 15000 files. It can do
> that when I'm away and don't care about my computer, but it shouldn't do
> it while I'm sitting in front of the machine waiting for something to
> finish.
>
> I've tried to talk about this with Jos already, and it seems we have
> conflicting goals here. However, from experience with Beagle I know that
> the number one complaint isn't that it doesn't find not-yet-indexed
> documents, but that it drains system resources like crazy.

Why would that be so bad? Of course it would hurt if you started re-indexing as soon as something changed. But if you collect changes and process them in batches, it wouldn't be so bad, and things would stay up to date. I'm sure it can be made smart enough to be unintrusive yet powerful.
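To make the idea concrete, here is a minimal sketch of that kind of batching: change events are only recorded, and re-indexing happens either when a batch grows large or when the filesystem has gone quiet for a while (so an "svn update" touching thousands of files triggers a handful of flushes, not thousands). This is a hypothetical illustration, not Strigi's actual code; the names (`BatchCollector`, `flush`, `on_change`) and the callback-based watcher interface are assumptions.

```python
import threading
import time


class BatchCollector:
    """Collect file-change events and hand them off in batches,
    instead of re-indexing on every single change.

    Hypothetical sketch -- not Strigi's real API.
    """

    def __init__(self, flush, batch_size=1000, idle_delay=5.0):
        self.flush = flush            # callback that receives a set of changed paths
        self.batch_size = batch_size  # flush early if the batch grows this large
        self.idle_delay = idle_delay  # otherwise wait until events stop arriving
        self.pending = set()
        self.last_event = 0.0
        self.lock = threading.Lock()

    def on_change(self, path):
        """Called by the directory watcher for every event; just records it."""
        with self.lock:
            self.pending.add(path)    # repeated changes to one file collapse to one entry
            self.last_event = time.monotonic()
            if len(self.pending) >= self.batch_size:
                self._flush_locked()

    def maybe_flush(self):
        """Called periodically; flushes once no event has arrived for idle_delay."""
        with self.lock:
            if self.pending and time.monotonic() - self.last_event >= self.idle_delay:
                self._flush_locked()

    def _flush_locked(self):
        batch, self.pending = self.pending, set()
        self.flush(batch)


# Example: a burst of 250 changes produces only three indexing passes
# (two size-triggered, one idle-triggered), not 250.
batches = []
collector = BatchCollector(batches.append, batch_size=100, idle_delay=0.0)
for i in range(250):
    collector.on_change(f"/src/file{i}")
collector.maybe_flush()
```

The `idle_delay` is what keeps it unintrusive while you sit at the machine: as long as something keeps touching files, nothing is re-indexed, and the backlog is processed once the burst ends.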

Dirk
