List:       lucene-user
Subject:    Re: setMaxClauseCount ??
From:       Doug Cutting <cutting@apache.org>
Date:       2004-01-21 16:52:28
Message-ID: 400EAE4C.7070901@apache.org

Karl Koch wrote:
> Do you know of good papers about strategies for selecting keywords
> effectively, beyond the scope of stopword lists and stemming?
> 
> Using the document's term frequencies is not really possible, since Lucene
> does not provide access to a document vector, does it?

Lucene does let you access the document frequency of terms, with 
IndexReader.docFreq().  Term frequencies can be computed by 
re-tokenizing the text, which, for a single document, is usually fast 
enough.  But looking up the docFreq() of every term in the document is 
probably too slow.
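
In case it helps, here is a rough sketch of the re-tokenizing step.  It is 
untested, the field and class names are arbitrary, and it only assumes the 
standard analysis API (Analyzer.tokenStream(), Token.termText()):

  import java.io.IOException;
  import java.io.StringReader;
  import java.util.HashMap;
  import java.util.Map;

  import org.apache.lucene.analysis.Analyzer;
  import org.apache.lucene.analysis.Token;
  import org.apache.lucene.analysis.TokenStream;

  public class TermFreqs {

    /** Re-tokenize a document's text and count how often each term occurs. */
    public static Map countTerms(String text, Analyzer analyzer, String field)
        throws IOException {
      Map freqs = new HashMap();  // term text -> Integer count
      TokenStream stream = analyzer.tokenStream(field, new StringReader(text));
      try {
        for (Token token = stream.next(); token != null; token = stream.next()) {
          String term = token.termText();
          Integer tf = (Integer) freqs.get(term);
          freqs.put(term, tf == null ? new Integer(1)
                                     : new Integer(tf.intValue() + 1));
        }
      } finally {
        stream.close();
      }
      return freqs;
    }
  }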

You can use some heuristics to prune the set of terms, to avoid calling 
docFreq() too much, or at all.  Since you're trying to maximize a tf*idf 
score, you're probably most interested in terms with a high tf. 
Choosing a tf threshold even as low as two or three will radically 
reduce the number of terms under consideration.  Another heuristic is 
that terms with a high idf (i.e., a low df) tend to be longer.  So you 
could threshold the terms by the number of characters, not selecting 
anything shorter than, e.g., six or seven characters.  With these sorts of 
heuristics you can usually find a small set of, e.g., ten or fewer terms 
that do a pretty good job of characterizing a document.
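
Putting those heuristics together, term selection might look roughly like 
the following.  Again, this is only a sketch; the class and parameter names 
are made up, and it assumes a term-frequency map like the one computed above:

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.Comparator;
  import java.util.Iterator;
  import java.util.List;
  import java.util.Map;

  import org.apache.lucene.index.IndexReader;
  import org.apache.lucene.index.Term;

  public class TermSelector {

    /** Apply the cheap tf and length thresholds first, so that docFreq()
        is only called for the few terms that survive; then rank the
        survivors by tf*idf. */
    public static List selectTerms(Map freqs, IndexReader reader, String field,
                                   int minTf, int minLength)
        throws IOException {
      int numDocs = reader.numDocs();
      List scored = new ArrayList();  // elements: Object[] {Double score, String term}
      Iterator it = freqs.entrySet().iterator();
      while (it.hasNext()) {
        Map.Entry e = (Map.Entry) it.next();
        String text = (String) e.getKey();
        int tf = ((Integer) e.getValue()).intValue();
        if (tf < minTf || text.length() < minLength)
          continue;                   // pruned without touching the index
        int df = reader.docFreq(new Term(field, text));
        if (df == 0)
          continue;                   // not in the index at all
        double idf = Math.log((double) numDocs / df);
        scored.add(new Object[] { new Double(tf * idf), text });
      }
      Collections.sort(scored, new Comparator() {  // best score first
        public int compare(Object a, Object b) {
          return ((Double) ((Object[]) b)[0])
              .compareTo((Double) ((Object[]) a)[0]);
        }
      });
      return scored;
    }
  }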

It all depends on what you're trying to do.  If you're trying to eke out 
that last percent of precision and recall regardless of computational 
difficulty so that you can win a TREC competition, then the techniques I 
mention above are useless.  But if you're trying to provide a "more like 
this" button on a search results page that does a decent job and has 
good performance, such techniques might be useful.

An efficient, effective "more-like-this" query generator would be a 
great contribution, if anyone's interested.  I'd imagine that it would 
take a Reader or a String (the document's text) and an Analyzer, and return 
a set of representative terms using heuristics like those above.  The 
frequency and length thresholds could be parameters, etc.
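
For instance, its contract might look something like this.  All the names 
here are purely illustrative; no such class exists today:

  import java.io.IOException;
  import java.io.Reader;

  import org.apache.lucene.analysis.Analyzer;

  /** Illustrative sketch of the proposed generator. */
  public interface MoreLikeThisGenerator {

    /** Frequency threshold: ignore terms occurring fewer times than
        this in the document. */
    void setMinTermFreq(int minTermFreq);

    /** Length threshold: ignore terms shorter than this many characters. */
    void setMinTermLength(int minTermLength);

    /** Cap on how many representative terms to return. */
    void setMaxTerms(int maxTerms);

    /** Tokenize the document's text with the given analyzer and return
        the terms that best characterize it. */
    String[] representativeTerms(Reader text, Analyzer analyzer)
        throws IOException;
  }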

Doug


---------------------------------------------------------------------
To unsubscribe, e-mail: lucene-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: lucene-user-help@jakarta.apache.org
