
List:       solr-user
Subject:    RE: Question about autoAddReplicas
From:       "Tseng, Danny" <dtseng@informatica.com>
Date:       2017-03-31 6:19:59
Message-ID: DM5PR03MB27965C60446C1271264105A1D5370@DM5PR03MB2796.namprd03.prod.outlook.com


More details about the error...

State.json:

{"collection1":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"80000000-ffffffff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1_shard1_replica1",
            "dataDir":"hdfs://psvrlxcdh5mmdev1.somewhere.com:8020/Test/LDM/=
psvrlxbdecdh1Cluster/solr/collection1/core_node1/data/",
            "base_url":"http://psvrlxcdh5mmdev3.somewhere.com:48193/solr",
            "node_name":"psvrlxcdh5mmdev3.somewhere.com:48193_solr",
            "state":"active",
            "ulogDir":"hdfs://psvrlxcdh5mmdev1.somewhere.com:8020/Test/LDM/=
psvrlxbdecdh1Cluster/solr/collection1/core_node1/data/tlog",
            "leader":"true"}}},
      "shard2":{
        "range":"0-7fffffff",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"collection1_shard2_replica1",
            "base_url":"http://psvrlxcdh5mmdev3.somewhere.com:48193/solr",
            "node_name":"psvrlxcdh5mmdev3.somewhere.com:48193_solr",
            "state":"down",
            "leader":"true"}}}},
    "router":{
      "field":"_root_uid_",
      "name":"compositeId"},
    "maxShardsPerNode":"2",
    "autoAddReplicas":"true"}}


Solr.log:
ERROR - 2017-03-31 06:00:54.382; [c:collection1 s:shard2 r:core_node2 x:collection1_shard2_replica1] org.apache.solr.core.CoreContainer; Error creating core [collection1_shard2_replica1]: Index dir 'hdfs://psvrlxcdh5mmdev1.somewhere.com:8020/Test/LDM/psvrlxbdecdh1Cluster/solr/collection1/core_node2/data/index/' of core 'collection1_shard2_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
org.apache.solr.common.SolrException: Index dir 'hdfs://psvrlxcdh5mmdev1.somewhere.com:8020/Test/LDM/psvrlxbdecdh1Cluster/solr/collection1/core_node2/data/index/' of core 'collection1_shard2_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:903)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:776)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:842)
        at org.apache.solr.core.CoreContainer.create(CoreContainer.java:779)
        at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:88)
        at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:377)
        at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:365)
        at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:156)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
        at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:660)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:441)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
        at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
        at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
        at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
        at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
        at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
        at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
        at org.eclipse.jetty.server.Server.handle(Server.java:518)
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
        at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
        at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
        at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
        at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
        at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir 'hdfs://psvrlxcdh5mmdev1.somewhere.com:8020/Test/LDM/psvrlxbdecdh1Cluster/solr/collection1/core_node2/data/index/' of core 'collection1_shard2_replica1' is already locked. The most likely cause is another Solr server (or another solr core in this server) also configured to use this directory; other possible causes may be specific to lockType: hdfs
        at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:658)
        at org.apache.solr.core.SolrCore.<init>(SolrCore.java:850)
        ... 36 more
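
For what it's worth, a LockObtainFailedException after a 'kill -9' usually means the old core's lock file is still sitting in the HDFS index directory, since HdfsLockFactory never got a chance to release it when the JVM died. Below is a minimal sketch (my assumption, not a verified fix) that checks for, and optionally removes, a stale lock with the Hadoop FileSystem API. It assumes the default Lucene lock file name write.lock, uses the index dir from the error above, and deleting is only safe once you are certain no live core still uses that directory:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StaleLockCheck {
    public static void main(String[] args) throws Exception {
        // Index dir copied from the error message above.
        String indexDir = "hdfs://psvrlxcdh5mmdev1.somewhere.com:8020/Test/LDM/"
                + "psvrlxbdecdh1Cluster/solr/collection1/core_node2/data/index";
        // Assumes the default Lucene lock file name; verify what actually exists in the dir.
        Path lock = new Path(indexDir, "write.lock");

        FileSystem fs = FileSystem.get(URI.create(indexDir), new Configuration());
        if (fs.exists(lock)) {
            System.out.println("Stale lock found: " + lock);
            // Only delete once no Solr core is still using this directory.
            fs.delete(lock, false);
        } else {
            System.out.println("No write.lock present.");
        }
    }
}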


From: Tseng, Danny [mailto:dtseng@informatica.com]
Sent: Thursday, March 30, 2017 9:35 PM
To: solr-user@lucene.apache.org
Subject: Question about autoAddReplicas

Hi,

I created a collection with 2 shards, a replication factor of 1, and autoAddReplicas enabled. Then I killed the shard2 node with 'kill -9'. The overseer asked the other Solr node to create a new core pointing to shard2's dataDir. Unfortunately, the new core failed to come up because of a pre-existing write lock. Below is the new cluster state after the failover; notice that shard2 doesn't have a dataDir assigned. Am I missing something?

[attached screenshot (image001.png): cluster state after the failover]
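
For reference, the collection was created roughly like this (a SolrJ sketch of what I described above; the ZooKeeper connect string and config set name are placeholders, while the shard/replica settings match the state.json above):

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class CreateCollection {
    public static void main(String[] args) throws Exception {
        // ZooKeeper connect string and config set name are placeholders.
        try (CloudSolrClient client = new CloudSolrClient.Builder()
                .withZkHost("zk1:2181,zk2:2181,zk3:2181/solr").build()) {
            CollectionAdminRequest.Create create = CollectionAdminRequest
                    .createCollection("collection1", "myConfigSet", 2, 1); // 2 shards, replicationFactor=1
            create.setMaxShardsPerNode(2);
            create.setAutoAddReplicas(true); // let the overseer re-create replicas on live nodes
            create.process(client);
        }
    }
}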



