On Mon, 07-06-2004 at 18:11, Gustavo Sverzut Barbieri wrote:
> > Creating the index is the problem. A backend process should be constantly
> > running at a low priority. Initially, it indexes all the files. Then, it
> > begins to index files as they change. It always keeps a fresh index of the
> > most important files, and gets around to less important files when it has
> > the time.
>
> That doesn't work.
>  - a process accessing the disk every time will screw up OS disk caching.

On the contrary. An indexing daemon which automatically indexes cached
files would work much faster, since the worked-on file is, with 99%
probability, already in the disk cache. Plus, new functionality certainly
needs to have resources allocated. As this is free software, you can always
turn it off, but I'm certain most users would welcome such an advance,
even if it screws with disk caching, which, as I already laid out before,
it won't.

>  - IMHO, one may want to find the last file he/she accessed/saved/created
> more than anything else. So these will be the most important... and probably
> will not be indexed.

Who said so? That's the exact point of an index which updates itself when
files are modified (perhaps one or two seconds later). One or two seconds
for updates doesn't seem that slow to me.

> > ... the database part of the tool. The requirements for the index cache are
> > very different from what PostgreSQL provides. I do believe that we can
> > steal a lot of ideas from the database community on how to search vast
> > indexes efficiently. Perhaps early implementations will rely on a database,
> > but that should be temporary.
>
> I also don't think so.

Well, Jonathan, you might not think so, but it's the fastest route to a
rich metadata desktop system. You need an index (for which you need a
database), you need a file alteration monitor, and you also need a system
which performs live queries against the index. It's also exactly what
Microsoft is doing with Longhorn. Only I'd like to see the metadata stored
directly with the file as EAs, and indexed, while Microsoft is putting
metadata in the Yukon database (a direction which will probably make most
backup tools obsolete).

-- 
Manuel Amador
Head of R&D                         +593 (9) 847-7372
Amauta                              http://www.amautacorp.com/
GNU Privacy Guard key ID: 0xC1033CAD at keyserver.net
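
P.S. To make the "index files as they change" idea concrete, here is a
minimal sketch of such a daemon in Python. It only polls for
modification-time changes and keeps the index in an SQLite table; a real
implementation would subscribe to a file alteration monitor such as FAM
instead of rescanning, and the watched directory, database path, table
layout and two-second interval below are my own illustrative assumptions,
not anything that ships today.

#!/usr/bin/env python
"""Sketch of a low-priority indexing daemon (illustrative assumptions only)."""

import os
import sqlite3
import time

WATCH_ROOT = os.path.expanduser("~")              # assumption: index the home directory
INDEX_DB = os.path.expanduser("~/.file-index.db") # assumption: index location
POLL_INTERVAL = 2.0                               # the "one or two seconds" from above

def open_index(path):
    """Create the index database on first run."""
    db = sqlite3.connect(path)
    db.execute(
        "CREATE TABLE IF NOT EXISTS files ("
        " path TEXT PRIMARY KEY,"
        " mtime REAL,"
        " size INTEGER)"
    )
    return db

def scan_once(db, root):
    """Walk the tree and re-index only files whose mtime changed.

    A file the user just saved is almost certainly still in the page
    cache, so touching it again here is cheap.  (Removing entries for
    deleted files is omitted for brevity.)"""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            try:
                st = os.stat(full)
            except OSError:
                continue  # file vanished between walk and stat
            row = db.execute(
                "SELECT mtime FROM files WHERE path = ?", (full,)
            ).fetchone()
            if row is None or row[0] != st.st_mtime:
                db.execute(
                    "INSERT OR REPLACE INTO files (path, mtime, size)"
                    " VALUES (?, ?, ?)",
                    (full, st.st_mtime, st.st_size),
                )
    db.commit()

def main():
    # Lower our own priority so the indexer never competes with the user.
    os.nice(19)
    db = open_index(INDEX_DB)
    while True:
        scan_once(db, WATCH_ROOT)
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    main()

The point of the sketch is that the update loop only touches files that
actually changed, i.e. exactly the files that are already hot in the disk
cache, which is why the caching objection above doesn't hold.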