
List:       solr-user
Subject:    Re: losing records during solr updates
From:       Erick Erickson <erickerickson () gmail ! com>
Date:       2017-03-28 19:43:27
Message-ID: CAN4YXvdCUa_9az0C=yb9t2aSqcJUSFp0gw9qjgms8dEGa-=O8w () mail ! gmail ! com

Shawn:

Two questions:

1> _how_ are you restarting a node? kill -9 is A Bad Thing, for
instance. Use the 'bin/solr stop' option.
2> How are you indexing? If you're using SolrJ, then a successful
response should indicate that the raw documents have been written to
_all_ active replicas' tlogs, so changing the leader should be
seamless. One scenario here is that your indexing process is throwing
indexing failures on the floor instead of retrying them.
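
The retry discipline in 2> can be sketched independently of SolrJ. This
is a minimal illustration, not the actual indexing code from this
thread; the sendBatch hook, attempt count, and backoff values are all
hypothetical stand-ins for the real SolrJ add/commit call:

```java
import java.util.List;
import java.util.function.Predicate;

// Minimal retry loop: a failed batch is retried with backoff instead of
// being dropped on the floor. sendBatch is a stand-in for the real
// indexing call (e.g. a SolrJ add) that reports success or failure.
public class RetryingIndexer {
    public static boolean indexWithRetry(List<String> batch,
                                         Predicate<List<String>> sendBatch,
                                         int maxAttempts) {
        long backoffMs = 100;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (sendBatch.test(batch)) {
                return true;              // acknowledged by the cluster
            }
            try {
                Thread.sleep(backoffMs); // transient failure: wait and retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
            backoffMs = Math.min(backoffMs * 2, 5000);
        }
        return false;                     // surface the failure to the caller
    }
}
```

The point is only that a failure must come back to the caller (or be
retried) rather than vanish; swallowing the exception is exactly
"throwing failures on the floor".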

Shots in the dark
Erick

On Mon, Mar 27, 2017 at 2:21 PM, Shawn Feldman <shawn.feldman@gmail.com> wrote:
> This update seems suspicious; the adds with the same id look like a
> closure issue in the retry.
> 
> -------
> solr1_1 | 2017-03-27 20:19:12.397 INFO (qtp575335780-17) [c:goseg s:shard24
> r:core_node12 x:goseg_shard24_replica2] o.a.s.u.p.LogUpdateProcessorFactory
> [goseg_shard24_replica2] webapp=/solr path=/update
> params={update.distrib=FROMLEADER&distrib.from=
> http://172.17.0.10:8983/solr/goseg_shard24_replica1/&min_rf=3&wt=javabin&version=2}{add=[dev_list_segmentation_test_76661_recipients!batch4-81@x.com
>  (1563055570139742208), dev_list_segmentation_test_76661_recipients!
> batch4-81@x.com (1563055570141839360),
> dev_list_segmentation_test_76661_recipients!batch4-81@x.com
> (1563055570141839361), dev_list_segmentation_test_76661_recipients!
> batch4-81@x.com (1563055570142887936),
> dev_list_segmentation_test_76661_recipients!batch4-81@x.com
> (1563055570143936512), dev_list_segmentation_test_76661_recipients!
> batch4-81@x.com (1563055570143936513),
> dev_list_segmentation_test_76661_recipients!batch4-81@x.com
> (1563055570143936514), dev_list_segmentation_test_76661_recipients!
> batch4-81@x.com (1563055570143936515),
> dev_list_segmentation_test_76661_recipients!batch4-81@x.com
> (1563055570144985088), dev_list_segmentation_test_76661_recipients!
> batch4-81@x.com (1563055570144985089)]} 0 23
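
The ten adds of the same id above are consistent with the suspected
closure issue. A hypothetical illustration (not the actual client code)
of how a retry path that appends to a shared pending list, instead of
resending a fixed snapshot, multiplies one document into many adds:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the suspected bug: each retry re-appends the
// document to a shared mutable list captured by the retry closure, so
// the payload that finally reaches /update contains N+1 copies.
public class RetryClosureBug {
    static List<String> buggySubmit(String docId, int retries) {
        List<String> pending = new ArrayList<>();
        for (int attempt = 0; attempt <= retries; attempt++) {
            pending.add(docId);           // bug: re-added on every retry
        }
        return pending;                   // what actually reaches /update
    }

    static List<String> fixedSubmit(String docId, int retries) {
        // fix: build the payload once; retries resend the same snapshot
        return List.of(docId);
    }
}
```

With 9 retries the buggy path sends 10 adds of one id, matching the ten
versions of batch4-81@x.com in the log line above.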
> 
> 
> 
> On Mon, Mar 27, 2017 at 3:04 PM Shawn Feldman <shawn.feldman@gmail.com>
> wrote:
> 
> > Here is the solr log of our test node restarting
> > 
> > https://s3.amazonaws.com/uploads.hipchat.com/17705/1138911/fvKS3t5uAnoi0pP/solrlog.txt
> >  
> > 
> > 
> > On Mon, Mar 27, 2017 at 2:10 PM Shawn Feldman <shawn.feldman@gmail.com>
> > wrote:
> > 
> > Ercan, I think you responded to the wrong thread
> > 
> > On Mon, Mar 27, 2017 at 2:02 PM Ercan Karadeniz <
> > ercan_karadeniz@hotmail.com> wrote:
> > 
> > 6.4.2 (latest available) or shall I use another one for familiarization
> > purposes?
> > 
> > 
> > ________________________________
> > Von: Alexandre Rafalovitch <arafalov@gmail.com>
> > Gesendet: Montag, 27. März 2017 21:28
> > An: solr-user
> > Betreff: Re: losing records during solr updates
> > 
> > What version of Solr is it?
> > 
> > Regards,
> > Alex.
> > ----
> > http://www.solr-start.com/ - Resources for Solr users, new and experienced
> > 
> > 
> > 
> > 
> > 
> > On 27 March 2017 at 15:25, Shawn Feldman <shawn.feldman@gmail.com> wrote:
> > > When we restart solr on a leader node while we are doing updates, we've
> > > noticed that some small percentage of data is lost, maybe 9 records out
> > > of 1k. Updating using min_rf=3 or full quorum seems to resolve this,
> > > since our rf = 3. Updates then seem to only succeed when all nodes are
> > > back up. Why would we see record loss during a node restart? I assumed
> > > the transaction log would get replayed. We have a 4 node cluster with
> > > 24 shards.
> > >
> > > -shawn
> > 
> > 
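
The min_rf behavior described in the original question boils down to a
client-side decision: the cluster still acknowledges an update when the
achieved replication factor falls short, and it is the client's job to
check and resend. A minimal sketch of that check, with the response
header simulated as a plain map rather than a real SolrJ response:

```java
import java.util.Map;

// Decide whether an update must be retried based on the achieved
// replication factor ("rf") reported back when min_rf is requested.
public class MinRfCheck {
    static boolean needsRetry(Map<String, Object> responseHeader, int minRf) {
        Object rf = responseHeader.get("rf");
        if (rf == null) {
            return true;                  // no rf reported: be conservative
        }
        return ((Number) rf).intValue() < minRf;
    }
}
```

This is why min_rf=3 "resolves" the loss in the scenario above: updates
that only reached a subset of replicas are detected and resent once all
nodes are back, instead of being silently accepted with rf < 3.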

