
List:       gluster-users
Subject:    Re: [Gluster-users] Gluster Performance - 12 Gbps SSDs and 10 Gbps NIC
From:       Gilberto Ferreira <gilberto.nunes32@gmail.com>
Date:       2023-12-14 12:59:25
Message-ID: <CAOKSTBvnNCu-6LamHbHp7ZE5wo-w89QBF+_A+3vRSCHMKdJbiw@mail.gmail.com>

Thanks for the advice.
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram






On Thu, Dec 14, 2023 at 09:54, Strahil Nikolov <hunter86_bg@yahoo.com>
wrote:

> Hi Gilberto,
> 
> 
> Have you checked
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance ?
> 
> I think that you will need to test the virt profile, as its settings will
> prevent some bad situations - especially around VM live migration.
> You should also consider sharding, which can reduce healing time but also
> makes your life more difficult if you need to access the disks of the VMs.
> 
> I think that client.event-threads, server.event-threads and
> performance.io-thread-count can be tuned in your case. Consider setting up
> a VM using the gluster volume as backing store and running the tests inside
> the VM to simulate a real workload (best is to run a DB, webserver, etc.
> inside the VM).
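> 
> Something along these lines, for example (just a sketch - the volume name VMS
> and the thread counts are placeholders, so check the option names against
> gluster volume set help on your version):
> 
> gluster volume set VMS group virt          # applies the virt profile (sharding, VM-friendly defaults)
> gluster volume set VMS client.event-threads 4
> gluster volume set VMS server.event-threads 4
> gluster volume set VMS performance.io-thread-count 32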
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> 
> 
> 
> 
> 
> On Wednesday, December 13, 2023, 2:34 PM, Gilberto Ferreira <
> gilberto.nunes32@gmail.com> wrote:
> 
> Hi all
> Aravinda, I usually set this in a two-server environment and never get split brain:
> gluster vol set VMS cluster.heal-timeout 5
> gluster vol heal VMS enable
> gluster vol set VMS cluster.quorum-reads false
> gluster vol set VMS cluster.quorum-count 1
> gluster vol set VMS network.ping-timeout 2
> gluster vol set VMS cluster.favorite-child-policy mtime
> gluster vol heal VMS granular-entry-heal enable
> gluster vol set VMS cluster.data-self-heal-algorithm full
> gluster vol set VMS features.shard on
> 
> Strahil, in general, I get 0.06 ms with a 1G dedicated NIC.
> My environment is very simple, using Proxmox + QEMU/KVM, with 3 or 5 VMs.
> 
> 
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
> 
> 
> 
> 
> 
> 
> On Wed, Dec 13, 2023 at 06:08, Strahil Nikolov <hunter86_bg@yahoo.com>
> wrote:
> 
> Hi Aravinda,
> 
> Based on the output it's a 'replica 3 arbiter 1' type.
> 
> Gilberto,
> What's the latency between the nodes?
> 
> Best Regards,
> Strahil Nikolov
> 
> 
> 
> On Wednesday, December 13, 2023, 7:36 AM, Aravinda <aravinda@kadalu.tech>
> wrote:
> 
> Only Replica 2 or Distributed Gluster volumes can be created with two
> servers. There is a high chance of split brain with Replica 2 compared to a
> Replica 3 volume.
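> 
> For example (a sketch with hypothetical host names and brick paths):
> 
> gluster volume create vm-data replica 2 node1:/bricks/vm node2:/bricks/vm
> # or a plain distributed volume, with no redundancy at all:
> gluster volume create vm-data node1:/bricks/vm node2:/bricks/vm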
> 
> For NFS Ganesha, there is no issue exporting the volume even if only one
> server is available. Run NFS Ganesha servers on the Gluster server nodes, and
> NFS clients from the network can connect to any NFS Ganesha server.
> 
> You can use HAProxy + Keepalived (or any other load balancer) if high
> availability is required for the NFS Ganesha connections (e.g. if a server
> node goes down, the NFS client can connect to another NFS Ganesha server node).
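> 
> A minimal Keepalived sketch for a floating VIP (the interface name, router ID
> and virtual IP below are placeholders, and a real setup should also track the
> nfs-ganesha service health):
> 
> # /etc/keepalived/keepalived.conf on each Ganesha node
> vrrp_instance ganesha_vip {
>     state BACKUP
>     interface eth0
>     virtual_router_id 51
>     priority 100          # use a different priority on each node
>     advert_int 1
>     virtual_ipaddress {
>         10.54.95.200/24
>     }
> }
> 
> NFS clients then mount the VIP, e.g. mount -t nfs 10.54.95.200:/data /mnt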
> 
> --
> Aravinda
> Kadalu Technologies
> 
> 
> 
> ---- On Wed, 13 Dec 2023 01:42:11 +0530 Gilberto Ferreira
> <gilberto.nunes32@gmail.com> wrote ----
> 
> Ah, that's nice.
> Does somebody know if this can be achieved with two servers?
> 
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
> 
> 
> 
> 
> 
> 
> 
> On Tue, Dec 12, 2023 at 17:08, Danny <dbray925+gluster@gmail.com>
> wrote:
> 
> 
> Wow, HUGE improvement with NFS-Ganesha!
> 
> 
> sudo dnf -y install glusterfs-ganesha
> sudo vim /etc/ganesha/ganesha.conf
> 
> NFS_CORE_PARAM {
> mount_path_pseudo = true;
> Protocols = 3,4;
> }
> EXPORT_DEFAULTS {
> Access_Type = RW;
> }
> 
> LOG {
> Default_Log_Level = WARN;
> }
> 
> EXPORT{
> Export_Id = 1 ;     # Export ID unique to each export
> Path = "/data";     # Path of the volume to be exported
> 
> FSAL {
> name = GLUSTER;
> hostname = "localhost"; # IP of one of the nodes in the trusted pool
> volume = "data";        # Volume name. Eg: "test_volume"
> }
> 
> Access_type = RW;           # Access permissions
> Squash = No_root_squash;    # To enable/disable root squashing
> Disable_ACL = TRUE;         # To enable/disable ACL
> Pseudo = "/data";           # NFSv4 pseudo path for this export
> Protocols = "3","4" ;       # NFS protocols supported
> Transports = "UDP","TCP" ;  # Transport protocols supported
> SecType = "sys";            # Security flavors supported
> }
> 
> 
> sudo systemctl enable --now nfs-ganesha
> sudo vim /etc/fstab
> 
> localhost:/data             /data                 nfs    defaults,_netdev    0 0
> 
> 
> sudo systemctl daemon-reload
> sudo mount -a
> 
> fio --name=test --filename=/data/wow --size=1G --readwrite=write
> 
> Run status group 0 (all jobs):
> WRITE: bw=2246MiB/s (2355MB/s), 2246MiB/s-2246MiB/s (2355MB/s-2355MB/s),
> io=1024MiB (1074MB), run=456-456msec
> 
> Yeah, 2355MB/s is much better than the original 115MB/s.
> 
> So in the end, I guess FUSE isn't the best choice.
> 
> On Tue, Dec 12, 2023 at 3:00 PM Gilberto Ferreira <
> gilberto.nunes32@gmail.com> wrote:
> 
> FUSE has some overhead.
> Take a look at libgfapi:
> 
> https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/libgfapi/
> 
> I know this doc is somewhat out of date, but it could be a hint.
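> 
> A rough sketch of what the gfapi path looks like (the host, volume and file
> names are just examples; qemu-img must be built with glusterfs support and fio
> needs the gfapi engine compiled in):
> 
> # create a VM disk directly on the volume over libgfapi, bypassing FUSE
> qemu-img create -f qcow2 gluster://10.54.95.123/data/vm-100-disk-0.qcow2 32G
> 
> # benchmark the gfapi path with fio, if your build includes that engine
> fio --name=gfapi-test --ioengine=gfapi --volume=data --brick=10.54.95.123 \
>     --size=1G --bs=1M --rw=write --direct=1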
> 
> 
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
> 
> 
> 
> 
> 
> 
> 
> On Tue, Dec 12, 2023 at 16:29, Danny <dbray925+gluster@gmail.com>
> wrote:
> 
> Nope, not a caching thing. I've tried multiple different types of fio
> tests; all produce the same results: Gbps when hitting the disks locally,
> slow MB/s when hitting the Gluster FUSE mount.
> 
> I've been reading up on NFS-Ganesha, and will give that a try.
> 
> On Tue, Dec 12, 2023 at 1:58 PM Ramon Selga <ramon.selga@gmail.com> wrote:
> 
> Dismiss my first question: you have SAS 12 Gbps SSDs. Sorry!
> 
> On 12/12/23 at 19:52, Ramon Selga wrote:
> 
> May I ask which kind of disks you have in this setup? Rotational, SAS/SATA
> SSD, NVMe?
> 
> Is there a RAID controller with writeback caching?
> 
> It seems to me your fio test on the local brick has an unclear result due
> to some caching.
> 
> Try something like this (you can consider increasing the test file size
> depending on your cache memory):
> 
> fio --size=16G --name=test --filename=/gluster/data/brick/wow --bs=1M
> --nrfiles=1 --direct=1 --sync=0 --randrepeat=0 --rw=write --refill_buffers
> --end_fsync=1 --iodepth=200 --ioengine=libaio
> 
> Also remember that a replica 3 arbiter 1 volume writes synchronously to two
> data bricks, halving the throughput of your network backend.
> 
> Try a similar fio run on the gluster mount, but I hardly see more than 300MB/s
> writing sequentially on only one FUSE mount, even with an NVMe backend. On the
> other hand, with 4 to 6 clients you can easily reach 1.5GB/s of aggregate
> throughput.
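> 
> To get a rough idea of aggregate throughput from a single client you can run
> several fio jobs in parallel (only an approximation - a real test would use
> separate client machines):
> 
> fio --name=agg-test --directory=/data --numjobs=4 --size=4G --bs=1M \
>     --direct=1 --rw=write --end_fsync=1 --ioengine=libaio --iodepth=32 \
>     --group_reporting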
> 
> To start, I think it is better to try with default parameters for your
> replica volume.
> 
> Best regards!
> 
> Ramon
> 
> 
> On 12/12/23 at 19:10, Danny wrote:
> 
> Sorry, I noticed that too after I posted, so I instantly upgraded to 10.
> Issue remains.
> 
> On Tue, Dec 12, 2023 at 1:09 PM Gilberto Ferreira <
> gilberto.nunes32@gmail.com> wrote:
> 
> I strongly suggest you update to version 10 or higher.
> It comes with significant performance improvements.
> ---
> Gilberto Nunes Ferreira
> (47) 99676-7530 - Whatsapp / Telegram
> 
> 
> 
> 
> 
> 
> On Tue, Dec 12, 2023 at 13:03, Danny <dbray925+gluster@gmail.com>
> wrote:
> 
> MTU is already 9000, and as you can see from the IPERF results, I've got a
> nice, fast connection between the nodes.
> 
> On Tue, Dec 12, 2023 at 9:49 AM Strahil Nikolov <hunter86_bg@yahoo.com>
> wrote:
> 
> Hi,
> 
> Let's try the simple things:
> 
> Check if you can use MTU 9000 and, if it's possible, set it on the bond
> slaves and the bond devices:
> ping GLUSTER_PEER -c 10 -M do -s 8972
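> 
> For example with NetworkManager (just a sketch - the connection names bond0,
> bond0-slave-eno1 and bond0-slave-eno2 are placeholders for your own):
> 
> nmcli connection modify bond0 802-3-ethernet.mtu 9000
> nmcli connection modify bond0-slave-eno1 802-3-ethernet.mtu 9000
> nmcli connection modify bond0-slave-eno2 802-3-ethernet.mtu 9000
> nmcli connection up bond0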
> 
> Then try to follow the recommendations from
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/chap-configuring_red_hat_storage_for_enhancing_performance
>  
> 
> 
> Best Regards,
> Strahil Nikolov
> 
> On Monday, December 11, 2023, 3:32 PM, Danny <dbray925+gluster@gmail.com>
> wrote:
> 
> Hello list, I'm hoping someone can let me know what setting I missed.
> 
> Hardware:
> Dell R650 servers, Dual 24 Core Xeon 2.8 GHz, 1 TB RAM
> 8x SSDs, negotiated speed 12 Gbps
> PERC H755 Controller - RAID 6
> Created virtual "data" disk from the above 8 SSD drives, for a ~20 TB
> /dev/sdb
> 
> OS:
> CentOS Stream
> kernel-4.18.0-526.el8.x86_64
> glusterfs-7.9-1.el8.x86_64
> 
> IPERF Test between nodes:
> [ ID] Interval           Transfer     Bitrate         Retr
> [  5]   0.00-10.00  sec  11.5 GBytes  9.90 Gbits/sec    0    sender
> [  5]   0.00-10.04  sec  11.5 GBytes  9.86 Gbits/sec         receiver
> 
> All good there. ~10 Gbps, as expected.
> 
> LVM Install:
> export DISK="/dev/sdb"
> sudo parted --script $DISK "mklabel gpt"
> sudo parted --script $DISK "mkpart primary 0% 100%"
> sudo parted --script $DISK "set 1 lvm on"
> sudo pvcreate --dataalignment 128K /dev/sdb1
> sudo vgcreate --physicalextentsize 128K gfs_vg /dev/sdb1
> sudo lvcreate -L 16G -n gfs_pool_meta gfs_vg
> sudo lvcreate -l 95%FREE -n gfs_pool gfs_vg
> sudo lvconvert --chunksize 1280K --thinpool gfs_vg/gfs_pool --poolmetadata gfs_vg/gfs_pool_meta
> sudo lvchange --zero n gfs_vg/gfs_pool
> sudo lvcreate -V 19.5TiB --thinpool gfs_vg/gfs_pool -n gfs_lv
> sudo mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 /dev/mapper/gfs_vg-gfs_lv
> sudo vim /etc/fstab
> /dev/mapper/gfs_vg-gfs_lv   /gluster/data/brick   xfs   rw,inode64,noatime,nouuid 0 0
> 
> sudo systemctl daemon-reload && sudo mount -a
> fio --name=test --filename=/gluster/data/brick/wow --size=1G --readwrite=write
> 
> Run status group 0 (all jobs):
> WRITE: bw=2081MiB/s (2182MB/s), 2081MiB/s-2081MiB/s (2182MB/s-2182MB/s),
> io=1024MiB (1074MB), run=492-492msec
> 
> All good there. 2182MB/s =~ 17.5 Gbps. Nice!
> 
> 
> Gluster install:
> export NODE1='10.54.95.123'
> export NODE2='10.54.95.124'
> export NODE3='10.54.95.125'
> sudo gluster peer probe $NODE2
> sudo gluster peer probe $NODE3
> sudo gluster volume create data replica 3 arbiter 1 $NODE1:/gluster/data/brick $NODE2:/gluster/data/brick $NODE3:/gluster/data/brick force
> sudo gluster volume set data network.ping-timeout 5
> sudo gluster volume set data performance.client-io-threads on
> sudo gluster volume set data group metadata-cache
> sudo gluster volume start data
> sudo gluster volume info all
> 
> Volume Name: data
> Type: Replicate
> Volume ID: b52b5212-82c8-4b1a-8db3-52468bc0226e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.54.95.123:/gluster/data/brick
> Brick2: 10.54.95.124:/gluster/data/brick
> Brick3: 10.54.95.125:/gluster/data/brick (arbiter)
> Options Reconfigured:
> network.inode-lru-limit: 200000
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> features.cache-invalidation-timeout: 600
> features.cache-invalidation: on
> network.ping-timeout: 5
> transport.address-family: inet
> storage.fips-mode-rchecksum: on
> nfs.disable: on
> performance.client-io-threads: on
> 
> sudo vim /etc/fstab
> localhost:/data             /data                 glusterfs   defaults,_netdev   0 0
> 
> sudo systemctl daemon-reload && sudo mount -a
> fio --name=test --filename=/data/wow --size=1G --readwrite=write
> 
> Run status group 0 (all jobs):
> WRITE: bw=109MiB/s (115MB/s), 109MiB/s-109MiB/s (115MB/s-115MB/s),
> io=1024MiB (1074MB), run=9366-9366msec
> 
> Oh no, what's wrong? From 2182MB/s down to only 115MB/s? What am I
> missing? I'm not expecting the above ~17 Gbps, but I'm thinking it should
> at least be close(r) to ~10 Gbps.
> 
> Any suggestions?
> 
> 





________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users

