
List:       gluster-users
Subject:    Re: [Gluster-users] java application crashes while reading a zip file
From:       Dmitry Isakbayev <isakdim@gmail.com>
Date:       2019-01-28 15:39:55
Message-ID: <CA+DoD6e_kFsoCzdE=F5bH1dPbcMiDh2CiXnmdORgVPJVFnQp8w@mail.gmail.com>


Amar,

Thank you for helping me troubleshoot the issues.  I don't have the
resources to test the software at this point, but I will keep it in mind.

Regards,
Dmitry


On Tue, Jan 22, 2019 at 1:02 AM Amar Tumballi Suryanarayan <
atumball@redhat.com> wrote:

> Dmitry,
>
> Thanks for the detailed updates on this thread. Let us know how your
> 'production' setup is running. For a much smoother next upgrade, we'd
> request your help with some early testing of the glusterfs-6 RC builds,
> which are expected to be out in the first week of February.
>
> Also, if it is possible for you to automate the tests, it would be great
> to have them in our regression suite, so we can make sure your setup
> never breaks in future releases.
>
> Regards,
> Amar
>
> On Mon, Jan 7, 2019 at 11:42 PM Dmitry Isakbayev <isakdim@gmail.com>
> wrote:
>
>> This system is going into production.  I will try to replicate this
>> problem on the next installation.
>>
>> On Wed, Jan 2, 2019 at 9:25 PM Raghavendra Gowdappa <rgowdapp@redhat.com>
>> wrote:
>>
>>>
>>>
>>> On Wed, Jan 2, 2019 at 9:59 PM Dmitry Isakbayev <isakdim@gmail.com>
>>> wrote:
>>>
>>>> Still no JVM crashes.  Is it possible that running glusterfs with
>>>> performance options turned off for a couple of days cleared out the "stale
>>>> metadata issue"?
>>>>
>>>
>>> Restarting these options would've cleared the existing cache, and hence
>>> the previous stale metadata. Hitting stale metadata again depends on
>>> races; that might be why you are still not seeing the issue. Can you try
>>> with all perf xlators enabled (the default configuration)?
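>>>
>>> For example, getting back to the defaults could be done by resetting the
>>> options you changed (a sketch, assuming the gv0 volume from your
>>> earlier mail):
>>>
>>> gluster volume reset gv0 performance.io-cache
>>> gluster volume reset gv0 performance.stat-prefetch
>>> gluster volume reset gv0 performance.quick-read
>>> gluster volume reset gv0 performance.parallel-readdir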
>>>
>>>
>>>>
>>>> On Mon, Dec 31, 2018 at 1:38 PM Dmitry Isakbayev <isakdim@gmail.com>
>>>> wrote:
>>>>
>>>>> The software ran with all of the options turned off over the weekend
>>>>> without any problems.
>>>>> I will try to collect the debug info for you.  I have re-enabled the
>>>>> three options, but have yet to see the problem recur.
>>>>>
>>>>>
>>>>> On Sat, Dec 29, 2018 at 6:46 PM Raghavendra Gowdappa <
>>>>> rgowdapp@redhat.com> wrote:
>>>>>
>>>>>> Thanks Dmitry. Can you provide the following debug info I asked for
>>>>>> earlier:
>>>>>>
>>>>>> * strace -ff -v ... of java application
>>>>>> * dump of the I/O traffic seen by the mountpoint (use --dump-fuse
>>>>>> while mounting).
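>>>>>>
>>>>>> For example (a sketch; the volfile server, volume name, and output
>>>>>> paths are placeholders to adjust for your setup):
>>>>>>
>>>>>> strace -ff -v -o /tmp/app.strace java -jar /path/to/app.jar
>>>>>> glusterfs --volfile-server=server1 --volfile-id=gv0 --dump-fuse=/tmp/fuse.dump /mnt/gv0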
>>>>>>
>>>>>> regards,
>>>>>> Raghavendra
>>>>>>
>>>>>> On Sat, Dec 29, 2018 at 2:08 AM Dmitry Isakbayev <isakdim@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> These 3 options seem to trigger both (reading zip file and renaming
>>>>>>> files) problems.
>>>>>>>
>>>>>>> Options Reconfigured:
>>>>>>> performance.io-cache: off
>>>>>>> performance.stat-prefetch: off
>>>>>>> performance.quick-read: off
>>>>>>> performance.parallel-readdir: off
>>>>>>> *performance.readdir-ahead: on*
>>>>>>> *performance.write-behind: on*
>>>>>>> *performance.read-ahead: on*
>>>>>>> performance.client-io-threads: off
>>>>>>> nfs.disable: on
>>>>>>> transport.address-family: inet
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Dec 28, 2018 at 10:24 AM Dmitry Isakbayev <isakdim@gmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Turning a single option on at a time still worked fine.  I will
>>>>>>>> keep trying.
>>>>>>>>
>>>>>>>> We had used 4.1.5 on KVM/CentOS7.5 at AWS without these issues or
>>>>>>>> log messages.  Do you suppose these issues are triggered by the new
>>>>>>>> environment, or did they simply not exist in 4.1.5?
>>>>>>>>
>>>>>>>> [root@node1 ~]# glusterfs --version
>>>>>>>> glusterfs 4.1.5
>>>>>>>>
>>>>>>>> On AWS using
>>>>>>>> [root@node1 ~]# hostnamectl
>>>>>>>>    Static hostname: node1
>>>>>>>>          Icon name: computer-vm
>>>>>>>>            Chassis: vm
>>>>>>>>         Machine ID: b30d0f2110ac3807b210c19ede3ce88f
>>>>>>>>            Boot ID: 52bb159a0aa94043a40e7c7651967bd9
>>>>>>>>     Virtualization: kvm
>>>>>>>>   Operating System: CentOS Linux 7 (Core)
>>>>>>>>        CPE OS Name: cpe:/o:centos:centos:7
>>>>>>>>             Kernel: Linux 3.10.0-862.3.2.el7.x86_64
>>>>>>>>       Architecture: x86-64
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Fri, Dec 28, 2018 at 8:56 AM Raghavendra Gowdappa <
>>>>>>>> rgowdapp@redhat.com> wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Fri, Dec 28, 2018 at 7:23 PM Dmitry Isakbayev <
>>>>>>>>> isakdim@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Ok. I will try different options.
>>>>>>>>>>
>>>>>>>>>> This system is scheduled to go into production soon.  What
>>>>>>>>>> version would you recommend to roll back to?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> These are long-standing issues, so rolling back may not make them
>>>>>>>>> go away. Instead, if the performance is acceptable to you, please
>>>>>>>>> keep these xlators off in production.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Thu, Dec 27, 2018 at 10:55 PM Raghavendra Gowdappa <
>>>>>>>>>> rgowdapp@redhat.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Fri, Dec 28, 2018 at 3:13 AM Dmitry Isakbayev <
>>>>>>>>>>> isakdim@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Raghavendra,
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks for the suggestion.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I am using
>>>>>>>>>>>>
>>>>>>>>>>>> [root@jl-fanexoss1p glusterfs]# gluster --version
>>>>>>>>>>>> glusterfs 5.0
>>>>>>>>>>>>
>>>>>>>>>>>> On
>>>>>>>>>>>> [root@jl-fanexoss1p glusterfs]# hostnamectl
>>>>>>>>>>>>          Icon name: computer-vm
>>>>>>>>>>>>            Chassis: vm
>>>>>>>>>>>>         Machine ID: e44b8478ef7a467d98363614f4e50535
>>>>>>>>>>>>            Boot ID: eed98992fdda4c88bdd459a89101766b
>>>>>>>>>>>>     Virtualization: vmware
>>>>>>>>>>>>   Operating System: Red Hat Enterprise Linux Server 7.5 (Maipo)
>>>>>>>>>>>>        CPE OS Name: cpe:/o:redhat:enterprise_linux:7.5:GA:server
>>>>>>>>>>>>             Kernel: Linux 3.10.0-862.14.4.el7.x86_64
>>>>>>>>>>>>       Architecture: x86-64
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I have configured the following options
>>>>>>>>>>>>
>>>>>>>>>>>> [root@jl-fanexoss1p glusterfs]# gluster volume info
>>>>>>>>>>>> Volume Name: gv0
>>>>>>>>>>>> Type: Replicate
>>>>>>>>>>>> Volume ID: 5ffbda09-c5e2-4abc-b89e-79b5d8a40824
>>>>>>>>>>>> Status: Started
>>>>>>>>>>>> Snapshot Count: 0
>>>>>>>>>>>> Number of Bricks: 1 x 3 = 3
>>>>>>>>>>>> Transport-type: tcp
>>>>>>>>>>>> Bricks:
>>>>>>>>>>>> Brick1: jl-fanexoss1p.cspire.net:/data/brick1/gv0
>>>>>>>>>>>> Brick2: sl-fanexoss2p.cspire.net:/data/brick1/gv0
>>>>>>>>>>>> Brick3: nxquorum1p.cspire.net:/data/brick1/gv0
>>>>>>>>>>>> Options Reconfigured:
>>>>>>>>>>>> performance.io-cache: off
>>>>>>>>>>>> performance.stat-prefetch: off
>>>>>>>>>>>> performance.quick-read: off
>>>>>>>>>>>> performance.parallel-readdir: off
>>>>>>>>>>>> performance.readdir-ahead: off
>>>>>>>>>>>> performance.write-behind: off
>>>>>>>>>>>> performance.read-ahead: off
>>>>>>>>>>>> performance.client-io-threads: off
>>>>>>>>>>>> nfs.disable: on
>>>>>>>>>>>> transport.address-family: inet
>>>>>>>>>>>>
>>>>>>>>>>>> I don't know if it is related, but I am seeing a lot of
>>>>>>>>>>>> [2018-12-27 20:19:23.776080] W [MSGID: 114031]
>>>>>>>>>>>> [client-rpc-fops_v2.c:1932:client4_0_seek_cbk] 2-gv0-client-0: remote
>>>>>>>>>>>> operation failed [No such device or address]
>>>>>>>>>>>> [2018-12-27 20:19:47.735190] E [MSGID: 101191]
>>>>>>>>>>>> [event-epoll.c:671:event_dispatch_epoll_worker] 2-epoll: Failed to dispatch
>>>>>>>>>>>> handler
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> These messages were introduced by patch [1]. To the best of my
>>>>>>>>>>> knowledge they are benign. We'll be sending a patch to fix them,
>>>>>>>>>>> though.
>>>>>>>>>>>
>>>>>>>>>>> +Mohit Agrawal <moagrawa@redhat.com> +Milind Changire
>>>>>>>>>>> <mchangir@redhat.com>. Can you try to identify why we are
>>>>>>>>>>> seeing these messages? If possible please send a patch to fix this.
>>>>>>>>>>>
>>>>>>>>>>> [1]
>>>>>>>>>>> https://review.gluster.org/r/I578c3fc67713f4234bd3abbec5d3fbba19059ea5
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> And java.io exceptions trying to rename files.
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> When you see the errors, is it possible to collect:
>>>>>>>>>>> * strace of the java application (strace -ff -v ...)
>>>>>>>>>>> * fuse-dump of the glusterfs mount (use option --dump-fuse while
>>>>>>>>>>> mounting)?
>>>>>>>>>>>
>>>>>>>>>>> I also need another favour from you. By trial and error, can you
>>>>>>>>>>> point out which of the many performance xlators you've turned off
>>>>>>>>>>> is causing the issue?
>>>>>>>>>>>
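>>>>>>>>>>> One way to bisect (a sketch; gv0 and the option names are taken
>>>>>>>>>>> from your earlier mail) is to re-enable the xlators one at a time
>>>>>>>>>>> from the all-off state and rerun the workload after each change:
>>>>>>>>>>>
>>>>>>>>>>> for opt in io-cache stat-prefetch quick-read parallel-readdir readdir-ahead write-behind read-ahead; do
>>>>>>>>>>>     gluster volume set gv0 performance.$opt on
>>>>>>>>>>>     read -r -p "performance.$opt is on; rerun the workload, then press enter " _
>>>>>>>>>>> done
>>>>>>>>>>>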
>>>>>>>>>>> The above two data-points will help us to fix the problem.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> Thank You,
>>>>>>>>>>>> Dmitry
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Dec 27, 2018 at 3:48 PM Raghavendra Gowdappa <
>>>>>>>>>>>> rgowdapp@redhat.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> What version of glusterfs are you using? It might be either
>>>>>>>>>>>>> * a stale metadata issue, or
>>>>>>>>>>>>> * an inconsistent ctime issue.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you try turning off all performance xlators? If the issue
>>>>>>>>>>>>> is the first, that should help.
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Fri, Dec 28, 2018 at 1:51 AM Dmitry Isakbayev <
>>>>>>>>>>>>> isakdim@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Attempted to set 'performance.read-ahead' to off according to
>>>>>>>>>>>>>> https://jira.apache.org/jira/browse/AMQ-7041
>>>>>>>>>>>>>> That did not help.
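>>>>>>>>>>>>>>
>>>>>>>>>>>>>> For reference, that toggle is of the form (assuming the same gv0
>>>>>>>>>>>>>> volume as in the info above):
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> gluster volume set gv0 performance.read-ahead off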
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Mon, Dec 24, 2018 at 2:11 PM Dmitry Isakbayev <
>>>>>>>>>>>>>> isakdim@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> The core file generated by the JVM suggests that it happens
>>>>>>>>>>>>>>> because the file is changing while it is being read
>>>>>>>>>>>>>>> (https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8186557).
>>>>>>>>>>>>>>> The application reads in the zip file and goes through the zip
>>>>>>>>>>>>>>> entries, then reloads the file and goes through the zip entries
>>>>>>>>>>>>>>> again.  It does so 3 times.  The application never crashes on
>>>>>>>>>>>>>>> the 1st cycle but sometimes crashes on the 2nd or 3rd cycle.
>>>>>>>>>>>>>>> The zip file is generated about 20 seconds before it is used
>>>>>>>>>>>>>>> and is not updated or even used by any other application.  I
>>>>>>>>>>>>>>> have never seen this problem on a plain file system.
>>>>>>>>>>>>>>>
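>>>>>>>>>>>>>>> A rough shell approximation of that access pattern, to try to
>>>>>>>>>>>>>>> reproduce it outside of the JVM (the path is a placeholder):
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> for i in 1 2 3; do
>>>>>>>>>>>>>>>     unzip -t /mnt/gv0/path/to/archive.zip >/dev/null || echo "read failed on cycle $i"
>>>>>>>>>>>>>>> done
>>>>>>>>>>>>>>>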
>>>>>>>>>>>>>>> I would appreciate any suggestions on how to go about
>>>>>>>>>>>>>>> debugging this issue.  I can change the source code of the
>>>>>>>>>>>>>>> java application.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>>> Dmitry
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>
>
>
> --
> Amar Tumballi (amarts)
>




_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users
