
List:       lucene-dev
Subject:    [jira] Issue Comment Edited: (LUCENE-1195) Performance improvement
From:       "Michael Busch (JIRA)" <jira () apache ! org>
Date:       2008-02-27 1:04:53
Message-ID: 947429919.1204074293130.JavaMail.jira () brutus


    [ https://issues.apache.org/jira/browse/LUCENE-1195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12572750#action_12572750 ]

michaelbusch edited comment on LUCENE-1195 at 2/26/08 5:04 PM:
----------------------------------------------------------------

Test details:
The index has 500,000 docs and 3,191,625 unique terms. To construct the queries
I used terms with 1000 < df < 3000; the index has 3,880 such terms, which I
combined randomly. Each query has at least one hit. The AND queries have 25 hits
on average, the OR queries 5,641.

The LRU cache was pretty small with a size of just 20.

The index is unoptimized with 11 segments.

The searcher was warmed up before the tests, so it benefited from filesystem
caching, which should be a common scenario for a medium-sized index like this.
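As an aside for anyone trying to reproduce the setup, the random AND/OR queries
described above could be built roughly as follows. This is only a sketch against
the Lucene 2.x API; the candidateTerms list (the 3,880 terms with 1000 < df < 3000)
and the RandomQueryBuilder name are assumptions of mine, not the actual test code.

import java.util.List;
import java.util.Random;

import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.TermQuery;

class RandomQueryBuilder {

  // Combine randomly picked candidate terms into an AND (MUST) or OR (SHOULD)
  // query. Duplicate picks are not filtered out, for brevity.
  static BooleanQuery buildQuery(List<Term> candidateTerms, int numTerms,
                                 boolean conjunctive, Random random) {
    BooleanQuery query = new BooleanQuery();
    BooleanClause.Occur occur =
        conjunctive ? BooleanClause.Occur.MUST : BooleanClause.Occur.SHOULD;
    for (int i = 0; i < numTerms; i++) {
      Term term = candidateTerms.get(random.nextInt(candidateTerms.size()));
      query.add(new TermQuery(term), occur);
    }
    return query;
  }
}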

> Performance improvement for TermInfosReader
> -------------------------------------------
> 
> Key: LUCENE-1195
> URL: https://issues.apache.org/jira/browse/LUCENE-1195
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Index
> Reporter: Michael Busch
> Assignee: Michael Busch
> Priority: Minor
> Fix For: 2.4
> 
> 
> Currently we have a bottleneck for multi-term queries: the dictionary lookup is
> done twice for each term. The first time is in Similarity.idf(), where
> searcher.docFreq() is called. The second time is when the posting list is opened
> (TermDocs or TermPositions). The dictionary lookup is not cheap, so a significant
> performance improvement is possible here if we avoid the second lookup. An easy
> way to do this is to add a small LRU cache to TermInfosReader (see the cache
> sketch below).
> I ran some performance experiments with an LRU cache size of 20 and a mid-size
> index of 500,000 documents from Wikipedia. Here are some test results:
> 50,000 AND queries with 3 terms each:
> old:                  152 secs
> new (with LRU cache): 112 secs (26% faster)
> 50,000 OR queries with 3 terms each:
> old:                  175 secs
> new (with LRU cache): 133 secs (24% faster)
> For bigger indexes this patch will probably have less impact, for smaller ones
> more. I will attach a patch soon.
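For readers unfamiliar with the approach, here is a minimal sketch of the kind of
small LRU cache the quoted description refers to, built on java.util.LinkedHashMap's
access-order mode. The class name TermInfoCache and the integration point are my
assumptions for illustration, not the actual patch attached to this issue.

import java.util.LinkedHashMap;
import java.util.Map;

// Fixed-size LRU cache: once the cache grows beyond its capacity (e.g. 20 entries,
// as in the tests above), the least recently accessed entry is evicted.
class TermInfoCache<K, V> {

  private final int capacity;
  private final Map<K, V> map;

  TermInfoCache(final int capacity) {
    this.capacity = capacity;
    // accessOrder = true keeps the map ordered from least to most recently accessed
    this.map = new LinkedHashMap<K, V>(capacity, 0.75f, true) {
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > TermInfoCache.this.capacity;
      }
    };
  }

  // Synchronized because a single TermInfosReader may be shared across search threads.
  synchronized V get(K key) {
    return map.get(key);
  }

  synchronized void put(K key, V value) {
    map.put(key, value);
  }
}

In TermInfosReader.get(Term), the reader would first try cache.get(term) and only
fall back to the expensive dictionary seek on a miss, caching the result afterwards,
so the docFreq() lookup done for Similarity.idf() already pays for the later
TermDocs/TermPositions open.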

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org

