
List:       gluster-users
Subject:    Re: [Gluster-users] [Gluster-devel]  3.7.13 & proxmox/qemu
From:       David Gossage <dgossage@carouselchecks.com>
Date:       2016-07-27 13:37:34
Message-ID: CAJXeYEXm9SJ6ZbVC13SWCHXyLd7DXPFkNWxe4TFd3oX-VmdxHw@mail.gmail.com


On Tue, Jul 26, 2016 at 9:38 PM, Krutika Dhananjay <kdhananj@redhat.com>
wrote:

> Yes please, could you file a bug against glusterfs for this issue?
> 

https://bugzilla.redhat.com/show_bug.cgi?id=1360785


> 
> 
> -Krutika
> 
> On Wed, Jul 27, 2016 at 1:39 AM, David Gossage <
> dgossage@carouselchecks.com> wrote:
> 
> > Has a bug report been filed for this issue, or should I create one with
> > the logs and results provided so far?
> > 
> > *David Gossage*
> > *Carousel Checks Inc. | System Administrator*
> > *Office* 708.613.2284
> > 
> > On Fri, Jul 22, 2016 at 12:53 PM, David Gossage <
> > dgossage@carouselchecks.com> wrote:
> > 
> > > 
> > > 
> > > 
> > > On Fri, Jul 22, 2016 at 9:37 AM, Vijay Bellur <vbellur@redhat.com>
> > > wrote:
> > > 
> > > > On Fri, Jul 22, 2016 at 10:03 AM, Samuli Heinonen <
> > > > samppah@neutraali.net> wrote:
> > > > > Here is a quick way how to test this:
> > > > > GlusterFS 3.7.13 volume with default settings with brick on ZFS
> > > > dataset. gluster-test1 is server and gluster-test2 is client mounting with
> > > > FUSE.
> > > > > 
> > > > > Writing file with oflag=direct is not ok:
> > > > > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 bs=1024000
> > > > > dd: failed to open ‘file’: Invalid argument
> > > > > 
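
The EINVAL there comes from the open() itself, not the write: dd's
oflag=direct opens the target with O_DIRECT, which with default volume
settings is passed straight through to the brick, and presumably the ZFS
dataset backing it can't satisfy it. One way to confirm that on a client
(assuming strace is installed) would be something like:

  strace -f -e trace=open,openat dd if=/dev/zero of=file oflag=direct count=1 bs=1024000

which should show the open on 'file' carrying O_DIRECT and returning EINVAL
before any data moves.
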
> > > > > Enable network.remote-dio on Gluster Volume:
> > > > > [root@gluster-test1 gluster]# gluster volume set gluster network.remote-dio enable
> > > > > volume set: success
> > > > > 
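
As I understand it, network.remote-dio makes the client-side protocol
translator filter O_DIRECT out of the open flags before they reach the
bricks, which is why the same dd starts working below. Whether the option
actually took effect can be double-checked with (assuming your build has
volume get):

  gluster volume get gluster network.remote-dio
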
> > > > > Writing small file with oflag=direct is ok:
> > > > > [root@gluster-test2 gluster]# dd if=/dev/zero of=file oflag=direct count=1 bs=1024000
> > > > > 1+0 records in
> > > > > 1+0 records out
> > > > > 1024000 bytes (1.0 MB) copied, 0.0103793 s, 98.7 MB/s
> > > > > 
> > > > > Writing bigger file with oflag=direct is ok:
> > > > > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M
> > > > > 100+0 records in
> > > > > 100+0 records out
> > > > > 104857600 bytes (105 MB) copied, 1.10583 s, 94.8 MB/s
> > > > > 
> > > > > Enable Sharding on Gluster Volume:
> > > > > [root@gluster-test1 gluster]# gluster volume set gluster features.shard enable
> > > > > volume set: success
> > > > > 
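
Worth noting for anyone reproducing this: the shard translator only shards
files created after features.shard is turned on; files that already existed
keep their plain layout. A file's shard size is stamped on it at create time
and can be read back off a brick (hypothetical path; getfattr is from the
attr package):

  getfattr -n trusted.glusterfs.shard.block-size -e hex /path/to/brick/somefile

The same xattr shows up in my dumps further down.
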
> > > > > Writing small file with oflag=direct is ok:
> > > > > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=1 bs=1M
> > > > > 1+0 records in
> > > > > 1+0 records out
> > > > > 1048576 bytes (1.0 MB) copied, 0.0115247 s, 91.0 MB/s
> > > > > 
> > > > > Writing bigger file with oflag=direct is not ok:
> > > > > [root@gluster-test2 gluster]# dd if=/dev/zero of=file3 oflag=direct count=100 bs=1M
> > > > > dd: error writing ‘file3’: Operation not permitted
> > > > > dd: closing output file ‘file3’: Operation not permitted
> > > > > 
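
So the split seems to be: a write that stays inside the first shard
succeeds, and the failure appears once the write has to cross the shard
boundary and touch a .shard/<gfid>.N file (with Samuli's default settings
that boundary should be the default 4MB shard-block-size). A quick way to
bracket it, using a fresh file name so sharding definitely applies:

  dd if=/dev/zero of=file4 oflag=direct count=3 bs=1M   # stays inside shard 0
  dd if=/dev/zero of=file4 oflag=direct count=8 bs=1M   # crosses into .shard

file4 and the counts are just my guess at a minimal repro, not something
from Samuli's run.
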
> > > > 
> > > > 
> > > > Thank you for these tests! Would it be possible to share the brick and
> > > > client logs?
> > > > 
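
For anyone gathering the same data: with default log locations, the brick
logs live under /var/log/glusterfs/bricks/ on each server (named after the
brick path with slashes turned into dashes), and the FUSE client log is the
similarly dash-named /var/log/glusterfs/<mount-path>.log on the client.
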
> > > 
> > > Not sure if his tests are the same as my setup, but here is what I end up with:
> > > 
> > > Volume Name: glustershard
> > > Type: Replicate
> > > Volume ID: 0cc4efb6-3836-4caa-b24a-b3afb6e407c3
> > > Status: Started
> > > Number of Bricks: 1 x 3 = 3
> > > Transport-type: tcp
> > > Bricks:
> > > Brick1: 192.168.71.10:/gluster1/shard1/1
> > > Brick2: 192.168.71.11:/gluster1/shard2/1
> > > Brick3: 192.168.71.12:/gluster1/shard3/1
> > > Options Reconfigured:
> > > features.shard-block-size: 64MB
> > > features.shard: on
> > > server.allow-insecure: on
> > > storage.owner-uid: 36
> > > storage.owner-gid: 36
> > > cluster.server-quorum-type: server
> > > cluster.quorum-type: auto
> > > network.remote-dio: enable
> > > cluster.eager-lock: enable
> > > performance.stat-prefetch: off
> > > performance.io-cache: off
> > > performance.quick-read: off
> > > cluster.self-heal-window-size: 1024
> > > cluster.background-self-heal-count: 16
> > > nfs.enable-ino32: off
> > > nfs.addr-namelookup: off
> > > nfs.disable: on
> > > performance.read-ahead: off
> > > performance.readdir-ahead: on
> > > 
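
Note that network.remote-dio is already enabled on this volume, so the
O_DIRECT open itself goes through; unlike the un-tuned case above, the
failure below is "Operation not permitted" on the write, once it reaches
the second shard.
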
> > > 
> > > 
> > > dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/ oflag=direct count=100 bs=1M
> > > 81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/  __DIRECT_IO_TEST__  .trashcan/
> > > [root@ccengine2 ~]# dd if=/dev/zero of=/rhev/data-center/mnt/glusterSD/192.168.71.11\:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test oflag=direct count=100 bs=1M
> > > dd: error writing ‘/rhev/data-center/mnt/glusterSD/192.168.71.11:_glustershard/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test’: Operation not permitted
> > > 
> > > It creates the 64M file in the expected location, but the shard is then 0 bytes:
> > > 
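
(The xattr dumps below are taken straight off a brick, with something along
the lines of

  getfattr -d -m . -e hex gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test

run as root from / on one of the servers; the exact invocation is my
reconstruction from the output format.)
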
> > > # file: gluster1/shard1/1/81e19cd3-ae45-449c-b716-ec3e4ad4c2f0/images/test
> > > security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> > > trusted.afr.dirty=0x000000000000000000000000
> > > trusted.bit-rot.version=0x0200000000000000579231f3000e16e7
> > > trusted.gfid=0xec6de302b35f427985639ca3e25d9df0
> > > trusted.glusterfs.shard.block-size=0x0000000004000000
> > > trusted.glusterfs.shard.file-size=0x0000000004000000000000000000000000000000000000010000000000000000
> > > 
> > > # file: gluster1/shard1/1/.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1
> > > security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
> > > trusted.afr.dirty=0x000000000000000000000000
> > > trusted.gfid=0x2bfd3cc8a727489b9a0474241548fe80
> > > 
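
If I'm decoding those shard xattrs right:
trusted.glusterfs.shard.block-size=0x0000000004000000 is 0x4000000 =
67,108,864 bytes = 64MB, matching features.shard-block-size above, and the
first 8 bytes of trusted.glusterfs.shard.file-size are the same value, i.e.
the base file was sized to exactly one full shard. The empty shard's name,
.shard/ec6de302-b35f-4279-8563-9ca3e25d9df0.1, is just the base file's
trusted.gfid plus a shard index, so that 0-byte file is precisely the
second 64MB piece the failed write was headed for.
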
> > > 
> > > > Regards,
> > > > Vijay
> > > > _______________________________________________
> > > > Gluster-users mailing list
> > > > Gluster-users@gluster.org
> > > > http://www.gluster.org/mailman/listinfo/gluster-users
> > > 
> > > 
> > > 
> > 
> > _______________________________________________
> > Gluster-devel mailing list
> > Gluster-devel@gluster.org
> > http://www.gluster.org/mailman/listinfo/gluster-devel
> > 
> 
> 



_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-users
