
List:       gluster-users
Subject:    Re: [Gluster-users] Geo-Replication Faulty.
From:       Tiemen Ruiten <t.ruiten () rdmedia ! com>
Date:       2018-08-31 9:03:55
Message-ID: CAAegNz0BnJzndCD84HY49b7_FHmBNvNDW_aLVhbafQ8VOm+BRw () mail ! gmail ! com



I had the same issue a few weeks ago and found this thread which helped me
resolve it:
https://lists.gluster.org/pipermail/gluster-users/2018-July/034465.html
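
If I remember correctly, the core of the fix in that thread was that gsyncd could
not find libgfchangelog.so, because only the versioned library file is installed.
A rough sketch of what worked for me (the /usr/lib64 path is from my own CentOS
hosts and may differ on your distribution, so locate the library first):

```shell
# Find where the versioned library actually lives on your system.
find / -name "libgfchangelog.so*" 2>/dev/null

# On my hosts it was /usr/lib64/libgfchangelog.so.0, so I created the
# unversioned name the geo-replication worker looks up (run on every
# master node), then refreshed the linker cache:
ln -s /usr/lib64/libgfchangelog.so.0 /usr/lib64/libgfchangelog.so
ldconfig
```

After that, stopping and restarting the geo-replication session cleared the
Faulty state for me.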

On 28 August 2018 at 08:50, Krishna Verma <kverma@cadence.com> wrote:

> Hi All,
>
>
>
> I need help setting up geo-replication, as its status is Faulty.
>
>
>
> I did the following :
>
>
>
>    1. Set up a 2-node Gluster cluster with a replicated volume.
>    2. Set up a single-node slave with a Gluster volume.
>    3. Set up geo-replication between master and slave, but its status is
>    Faulty.
>
>
>
> I have installed "glusterfs 4.1.2" on all the nodes.
>
>
>
> In the logs I was getting the error below:
>
>
>
> [2018-08-28 04:39:00.639724] E [syncdutils(worker
> /data/gluster/gv0):753:logerr] Popen: ssh> failure: execution of
> "/usr/local/sbin/glusterfs" failed with ENOENT (No such file or directory)
>
> My gluster binaries are in /usr/sbin, so I ran:
>
>
>
> gluster volume geo-replication glusterep gluster-poc-sj::glusterep config
> gluster_command_dir /usr/sbin/
>
> gluster volume geo-replication glusterep gluster-poc-sj::glusterep config
> slave_gluster_command_dir /usr/sbin/
>
>
>
> I also created the following symlinks:
>
>
>
> ln -s /usr/sbin/gluster /usr/local/sbin/gluster
>
> ln -s /usr/sbin/glusterfs /usr/local/sbin/glusterfs
>
>
>
> But the status is still Faulty after restarting glusterd and creating a
> session again.
>
>
>
> MASTER NODE          MASTER VOL    MASTER BRICK         SLAVE USER    SLAVE                        SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
> -----------------------------------------------------------------------------------------------------------------------------------------------------
> gluster-poc-noida    glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
> noi-poc-gluster      glusterep     /data/gluster/gv0    root          gluster-poc-sj::glusterep    N/A           Faulty    N/A             N/A
>
>
>
> And now I am getting library-loading errors in the logs:
>
>
>
> ============================================
>
> OSError: libgfchangelog.so: cannot open shared object file: No such file
> or directory
>
> [2018-08-28 06:46:46.667423] E [repce(worker /data/gluster/gv0):197:__call__]
> RepceClient: call failed  call=19929:140516964480832:1535438806.66
> method=init     error=OSError
>
> [2018-08-28 06:46:46.667567] E [syncdutils(worker
> /data/gluster/gv0):330:log_raise_exception] <top>: FAIL:
>
> Traceback (most recent call last):
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 311, in
> main
>
>     func(args)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 72, in
> subcmd_worker
>
>     local.service_loop(remote)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/resource.py", line 1236,
> in service_loop
>
>     changelog_agent.init()
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 216, in
> __call__
>
>     return self.ins(self.meth, *a)
>
>   File "/usr/libexec/glusterfs/python/syncdaemon/repce.py", line 198, in
> __call__
>
>     raise res
>
> OSError: libgfchangelog.so: cannot open shared object file: No such file
> or directory
>
> [2018-08-28 06:46:46.678463] I [repce(agent /data/gluster/gv0):80:service_loop]
> RepceServer: terminating on reaching EOF.
>
> [2018-08-28 06:46:47.662086] I [monitor(monitor):272:monitor] Monitor:
> worker died in startup phase     brick=/data/gluster/gv0
>
>
>
>
>
> Any help would be appreciated.
>
>
>
>
>
> /Krish
>
>
>
>
>
>
>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Tiemen Ruiten
Systems Engineer
R&D Media




