List: cassandra-user
Subject: Re: Unreachable node, not in nodetool ring
From: Olivier Mallassi <omallassi@octo.com>
Date: 2012-07-27 9:36:08
Message-ID: CAMGF0rGq85B1Mwjq+fkXDvndeFV_Qwb_8DPOLvkM2L9FxyrZbQ@mail.gmail.com
nope

My last ideas would be (and I am not sure these are the best...):
- try removetoken with the -f option. I do not believe it will change
anything, but...
- try nodetool ring on ALL nodes and check that all nodes see the
unreachable node. If not, you could maybe just decommission the one(s)
that see the unreachable node.
- if you are in test, you can delete the system folder (a subfolder of
where all your data are saved, set by data_directory in cassandra.yaml,
by default /var/lib/cassandra/data). *But you will lose everything*...
- snapshot the data and restore it in another cluster; not that simple
depending on data volume, traffic, etc.
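The first two checks can be scripted. A minimal sketch, using the node
addresses from this thread as placeholders; the loop only prints the
nodetool commands to run (so it is safe to paste anywhere), and you would
then execute them on a box where nodetool is on the PATH:

```shell
#!/bin/sh
# Node addresses taken from this thread; replace with your own.
HOSTS="10.59.21.241 10.58.83.109"

for h in $HOSTS; do
    # Run these against every node and compare the output:
    # the ghost entry may appear in the ring/gossip state of
    # only some of the nodes.
    echo "nodetool -h $h ring"
    echo "nodetool -h $h gossipinfo"
done
```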
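For the "delete the system folder" option, here is a rehearsal of the
destructive step on a throwaway directory. DATA_DIR is a mktemp stand-in
for the real data_directory (/var/lib/cassandra/data by default), and
"MyKeyspace" is a made-up name; on a real TEST node you would stop
Cassandra first, remove data/system, then restart:

```shell
#!/bin/sh
# Stand-in for data_directory (/var/lib/cassandra/data by default).
DATA_DIR=$(mktemp -d)

# Fake the on-disk layout: the system keyspace plus one application
# keyspace (the name is hypothetical, for the rehearsal only).
mkdir -p "$DATA_DIR/system" "$DATA_DIR/MyKeyspace"

# The destructive step: only the system keyspace directory is removed.
# On a real node this erases the node's ring/token/schema metadata,
# hence the "you will lose everything" warning above.
rm -rf "$DATA_DIR/system"

ls "$DATA_DIR"    # the application keyspace directory is still there
```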
From my side, I do not have any more ideas... and once again, I am not
sure these are the best ;)
I do not know whether Cassandra can definitively consider a node dead
after a certain amount of time.
On Fri, Jul 27, 2012 at 11:04 AM, Alain RODRIGUEZ <arodrime@gmail.com> wrote:
> Hi again,
>
> Nobody has a clue about this issue ?
>
> I'm still facing this problem.
>
> Alain
>
> 2012/7/23 Alain RODRIGUEZ <arodrime@gmail.com>:
> > Does anyone know how to totally remove a dead node that only appears
> > when doing a "describe cluster" from the cli ?
> >
> > I still got this issue in my production cluster.
> >
> > Alain
> >
> > 2012/7/20 Alain RODRIGUEZ <arodrime@gmail.com>:
> >> Hi Aaron,
> >>
> >> I have repaired and cleaned up both nodes already, and I did it after any
> >> change on my ring (it took me a while btw :)).
> >>
> >> The node *.211 is actually out of the ring and out of my control
> >> 'cause I don't have the server anymore (EC2 instance terminated a few
> >> days ago).
> >>
> >> Alain
> >>
> >> 2012/7/20 aaron morton <aaron@thelastpickle.com>:
> >>> I would:
> >>>
> >>> * run repair on 10.58.83.109
> >>> * run cleanup on 10.59.21.241 (I assume this was the first node).
> >>>
> >>> It looks like 10.56.62.211 is out of the cluster.
> >>>
> >>> Cheers
> >>>
> >>> -----------------
> >>> Aaron Morton
> >>> Freelance Developer
> >>> @aaronmorton
> >>> http://www.thelastpickle.com
> >>>
> >>> On 19/07/2012, at 9:37 PM, Alain RODRIGUEZ wrote:
> >>>
> >>> Not sure if this may help :
> >>>
> >>> nodetool -h localhost gossipinfo
> >>> /10.58.83.109
> >>> RELEASE_VERSION:1.1.2
> >>> RACK:1b
> >>> LOAD:5.9384978406E10
> >>> SCHEMA:e7e0ec6c-616e-32e7-ae29-40eae2b82ca8
> >>> DC:eu-west
> >>> STATUS:NORMAL,85070591730234615865843651857942052864
> >>> RPC_ADDRESS:0.0.0.0
> >>> /10.248.10.94
> >>> RELEASE_VERSION:1.1.2
> >>> LOAD:3.0128207422E10
> >>> SCHEMA:e7e0ec6c-616e-32e7-ae29-40eae2b82ca8
> >>> STATUS:LEFT,0,1342866804032
> >>> RPC_ADDRESS:0.0.0.0
> >>> /10.56.62.211
> >>> RELEASE_VERSION:1.1.2
> >>> LOAD:11594.0
> >>> RACK:1b
> >>> SCHEMA:59adb24e-f3cd-3e02-97f0-5b395827453f
> >>> DC:eu-west
> >>> REMOVAL_COORDINATOR:REMOVER,85070591730234615865843651857942052864
> >>> STATUS:removed,170141183460469231731687303715884105727,1342453967415
> >>> RPC_ADDRESS:0.0.0.0
> >>> /10.59.21.241
> >>> RELEASE_VERSION:1.1.2
> >>> RACK:1b
> >>> LOAD:1.08667047094E11
> >>> SCHEMA:e7e0ec6c-616e-32e7-ae29-40eae2b82ca8
> >>> DC:eu-west
> >>> STATUS:NORMAL,0
> >>> RPC_ADDRESS:0.0.0.0
> >>>
> >>> Story :
> >>>
> >>> I had 2 node cluster
> >>>
> >>> 10.248.10.94 Token 0
> >>> 10.59.21.241 Token 85070591730234615865843651857942052864
> >>>
> >>> Had to replace node 10.248.10.94 so I added 10.56.62.211 at token 0 - 1
> >>> (170141183460469231731687303715884105727). This failed, so I removed the
> >>> token.
> >>>
> >>> I repeated the previous operation with the node 10.59.21.241 and it went
> >>> fine. Next I decommissioned the node 10.248.10.94 and moved
> >>> 10.59.21.241 to the token 0.
> >>>
> >>> Now I am in the situation described before.
> >>>
> >>> Alain
> >>>
> >>>
> >>> 2012/7/19 Alain RODRIGUEZ <arodrime@gmail.com>:
> >>>
> >>> Hi, I wasn't able to see the token currently used by 10.56.62.211
> >>> (the ghost node).
> >>>
> >>> I already removed the token 6 days ago :
> >>>
> >>> -> "Removing token 170141183460469231731687303715884105727 for
> >>> /10.56.62.211"
> >>>
> >>> "- check in cassandra log. It is possible you see a log line telling
> >>> you 10.56.62.211 and 10.59.21.241 or 10.58.83.109 share the same
> >>> token"
> >>>
> >>> Nothing like that in the logs.
> >>>
> >>> I tried the following without success :
> >>>
> >>> $ nodetool -h localhost removetoken 170141183460469231731687303715884105727
> >>> Exception in thread "main" java.lang.UnsupportedOperationException:
> >>> Token not found.
> >>> ...
> >>>
> >>> I really thought this was going to work :-).
> >>>
> >>> Any other ideas ?
> >>>
> >>> Alain
> >>>
> >>> PS : I heard that Octo is a nice company and you use Cassandra so I
> >>> guess you're fine in there :-). I wish you the best, thanks for your
> >>> help.
> >>>
> >>> 2012/7/19 Olivier Mallassi <omallassi@octo.com>:
> >>>
> >>> I got that a couple of times (due to DNS issues in our infra).
> >>>
> >>> What you could try:
> >>> - check in cassandra log. It is possible you see a log line telling you
> >>> 10.56.62.211 and 10.59.21.241 or 10.58.83.109 share the same token
> >>> - if 10.56.62.211 is up, try decommission (via nodetool)
> >>> - if not, move 10.59.21.241 or 10.58.83.109 to current token + 1
> >>> - use removetoken (via nodetool) to remove the token associated with
> >>> 10.56.62.211. In case of failure, you can use removetoken -f instead.
> >>>
> >>> Then, the unreachable IP should have disappeared.
> >>>
> >>> HTH
> >>>
> >>> On Thu, Jul 19, 2012 at 10:38 AM, Alain RODRIGUEZ <arodrime@gmail.com>
> >>> wrote:
> >>>
> >>> Hi,
> >>>
> >>> I tried to add a node a few days ago and it failed. I finally made it
> >>> work with another node, but now when I describe cluster on the cli I get
> >>> this :
> >>>
> >>> Cluster Information:
> >>>    Snitch: org.apache.cassandra.locator.Ec2Snitch
> >>>    Partitioner: org.apache.cassandra.dht.RandomPartitioner
> >>>    Schema versions:
> >>>       UNREACHABLE: [10.56.62.211]
> >>>       e7e0ec6c-616e-32e7-ae29-40eae2b82ca8: [10.59.21.241, 10.58.83.109]
> >>>
> >>> And nodetool ring gives me :
> >>>
> >>> Address       DC       Rack  Status  State   Load       Owns    Token
> >>>                                                                 85070591730234615865843651857942052864
> >>> 10.59.21.241  eu-west  1b    Up      Normal  101.17 GB  50.00%  0
> >>> 10.58.83.109  eu-west  1b    Up      Normal  55.27 GB   50.00%  85070591730234615865843651857942052864
> >>>
> >>> The point, as you can see, is that one of my nodes has twice the
> >>> data of the second one. I have RF = 2 defined.
> >>>
> >>> My guess is that the token 0 node keeps data for the unreachable node.
> >>>
> >>> The IP of the unreachable node doesn't belong to me anymore; I have no
> >>> access to this ghost node.
> >>>
> >>> Does someone know how to completely remove this ghost node from my
> >>> cluster ?
> >>>
> >>> Thank you.
> >>>
> >>> Alain
> >>>
> >>> INFO :
> >>> On ubuntu (AMI Datastax 2.1 and 2.2)
> >>> Cassandra 1.1.2 (upgraded from 1.0.9)
> >>> 2 node cluster (+ the ghost one)
> >>> RF = 2
> >>>
> >>> --
> >>> ............................................................
> >>> Olivier Mallassi
> >>> OCTO Technology
> >>> ............................................................
> >>> 50, Avenue des Champs-Elysées
> >>> 75008 Paris
> >>>
> >>> Mobile: (33) 6 28 70 26 61
> >>> Tél: (33) 1 58 56 10 00
> >>> Fax: (33) 1 58 56 10 01
> >>>
> >>> http://www.octo.com
> >>> Octo Talks! http://blog.octo.com
> >>>
>
--
............................................................
Olivier Mallassi
OCTO Technology
............................................................
50, Avenue des Champs-Elysées
75008 Paris
Mobile: (33) 6 28 70 26 61
Tél: (33) 1 58 56 10 00
Fax: (33) 1 58 56 10 01
http://www.octo.com
Octo Talks! http://blog.octo.com