
List:       gluster-users
Subject:    Re: [Gluster-users] df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
From:       Jose V. Carrión <jocarbur@gmail.com>
Date:       2018-02-28 17:01:26
Message-ID: CAGaO=PUF2Awo9xDx89m4TsdMaEutBcYfcFVH_R6cNPAE=RC05A@mail.gmail.com


Hi Nithya,

My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, after adding the new peer and its bricks I ran a rebalance with
the force option (commands sketched just below). The task finished
successfully (status output further down) and the number of files on the 3
nodes was very similar.
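
For reference, the expansion was done with commands along these lines (brick
paths as in the status output below; this is a sketch of the sort of commands
used, not a literal transcript, and volumedisk0 was expanded the same way
with its disk_b1/disk_b2 bricks):

gluster peer probe stor3data
gluster volume add-brick volumedisk1 \
    stor3data:/mnt/disk_c/glusterfs/vol1/brick1 \
    stor3data:/mnt/disk_d/glusterfs/vol1/brick1
gluster volume rebalance volumedisk1 start force
gluster volume rebalance volumedisk1 status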

For volumedisk1 I only have 500MB files, which are continuously written in
sequential mode. The filename pattern of the written files is:

run.node1.0000.rd
run.node2.0000.rd
run.node1.0001.rd
run.node2.0001.rd
run.node1.0002.rd
run.node2.0002.rd
...........
...........
run.node1.X.rd
run.node2.X.rd

( X ranging from 0000 upwards, with no upper bound )
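
To compare the file counts per brick directly, counting the data files under
each brick path should work (run on the node that owns the brick; this is
only a sketch using the paths from my setup):

find /mnt/glusterfs/vol1/brick1 -type f -name 'run.node*.rd' | wc -l         # stor1data / stor2data
find /mnt/disk_c/glusterfs/vol1/brick1 -type f -name 'run.node*.rd' | wc -l  # stor3data
find /mnt/disk_d/glusterfs/vol1/brick1 -type f -name 'run.node*.rd' | wc -l  # stor3data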

Curiously, stor1data and stor2data show very similar usage in bytes:

Filesystem    1K-blocks    Used         Available    Use%  Mounted on
/dev/sdc1     52737613824  17079174264  35658439560  33%   /mnt/glusterfs/vol1 -> stor1data
/dev/sdc1     52737613824  17118810848  35618802976  33%   /mnt/glusterfs/vol1 -> stor2data

However, usage on stor3data differs by well over 1TB:

Filesystem    1K-blocks    Used         Available    Use%  Mounted on
/dev/sdc1     52737613824  15479191748  37258422076  30%   /mnt/disk_c/glusterfs/vol1 -> stor3data
/dev/sdd1     52737613824  15566398604  37171215220  30%   /mnt/disk_d/glusterfs/vol1 -> stor3data
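
Quantifying that gap from the Used columns above (plain shell arithmetic,
values are 1K blocks):

echo $(( 17079174264 - 15479191748 ))  # stor1 sdc1 vs stor3 sdc1 = 1599982516 KiB, about 1.5 TiB
echo $(( 17118810848 - 15566398604 ))  # stor2 sdc1 vs stor3 sdd1 = 1552412244 KiB, about 1.4 TiB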

Looking at inodes:

Filesystem    Inodes      IUsed   IFree       IUse%  Mounted on
/dev/sdc1     5273970048  851053  5273118995  1%     /mnt/glusterfs/vol1 -> stor1data
/dev/sdc1     5273970048  849388  5273120660  1%     /mnt/glusterfs/vol1 -> stor2data
/dev/sdc1     5273970048  846877  5273123171  1%     /mnt/disk_c/glusterfs/vol1 -> stor3data
/dev/sdd1     5273970048  845250  5273124798  1%     /mnt/disk_d/glusterfs/vol1 -> stor3data

851053 (stor1) - 845250 (stor3, disk_d) = 5803 files of difference!

In addition, correct me if I'm wrong, but since stor3data holds 2 of the 4
bricks it should have roughly a 50% probability of storing any new file
(even taking the DHT algorithm and the filename pattern into account).
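
A very rough way to sanity-check that 50% intuition (illustrative only: it
hashes the filenames with cksum and folds the 32-bit value into 4 equal
buckets, which merely stands in for Gluster's real DHT hash and the actual
directory layout ranges):

for n in node1 node2; do
  for i in $(seq 0 999); do
    f=$(printf 'run.%s.%04d.rd' "$n" "$i")
    # hash the filename and fold it into 4 buckets (one per brick)
    h=$(printf '%s' "$f" | cksum | cut -d' ' -f1)
    echo $(( h % 4 ))
  done
done | sort | uniq -c

If the hash is reasonably uniform each bucket receives about a quarter of the
names, and with 2 of the 4 bricks on stor3data that works out to roughly half
of the new files landing there.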

Thanks,
Greetings.

Jose V.

Status of volume: volumedisk0
Gluster process                                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick stor1data:/mnt/glusterfs/vol0/brick1            49152     0          Y       13533
Brick stor2data:/mnt/glusterfs/vol0/brick1            49152     0          Y       13302
Brick stor3data:/mnt/disk_b1/glusterfs/vol0/brick1    49152     0          Y       17371
Brick stor3data:/mnt/disk_b2/glusterfs/vol0/brick1    49153     0          Y       17391
NFS Server on localhost                               N/A       N/A        N       N/A
NFS Server on stor3data                               N/A       N/A        N       N/A
NFS Server on stor2data                               N/A       N/A        N       N/A

Task Status of Volume volumedisk0
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : 7f5328cb-ed25-4627-9196-fb3e29e0e4ca
Status               : completed

Status of volume: volumedisk1
Gluster process                                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick stor1data:/mnt/glusterfs/vol1/brick1            49153     0          Y       13579
Brick stor2data:/mnt/glusterfs/vol1/brick1            49153     0          Y       13344
Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1     49154     0          Y       17439
Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1     49155     0          Y       17459
NFS Server on localhost                               N/A       N/A        N       N/A
NFS Server on stor3data                               N/A       N/A        N       N/A
NFS Server on stor2data                               N/A       N/A        N       N/A

Task Status of Volume volumedisk1
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : d0048704-beeb-4a6a-ae94-7e7916423fd3
Status               : completed


2018-02-28 15:40 GMT+01:00 Nithya Balachandran <nbalacha@redhat.com>:

> Hi Jose,
>
> On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur@gmail.com> wrote:
>
>> Hi Nithya,
>>
>> I applied the workaround for this bug and now df shows the right size:
>>
> That is good to hear.
>
>
>
>> [root@stor1 ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/sdb1              26T  1,1T   25T   4% /mnt/glusterfs/vol0
>> /dev/sdc1              50T   16T   34T  33% /mnt/glusterfs/vol1
>> stor1data:/volumedisk0
>>                       101T  3,3T   97T   4% /volumedisk0
>> stor1data:/volumedisk1
>>                       197T   61T  136T  31% /volumedisk1
>>
>>
>> [root@stor2 ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/sdb1              26T  1,1T   25T   4% /mnt/glusterfs/vol0
>> /dev/sdc1              50T   16T   34T  33% /mnt/glusterfs/vol1
>> stor2data:/volumedisk0
>>                       101T  3,3T   97T   4% /volumedisk0
>> stor2data:/volumedisk1
>>                       197T   61T  136T  31% /volumedisk1
>>
>>
>> [root@stor3 ~]# df -h
>> Filesystem            Size  Used Avail Use% Mounted on
>> /dev/sdb1              25T  638G   24T   3% /mnt/disk_b1/glusterfs/vol0
>> /dev/sdb2              25T  654G   24T   3% /mnt/disk_b2/glusterfs/vol0
>> /dev/sdc1              50T   15T   35T  30% /mnt/disk_c/glusterfs/vol1
>> /dev/sdd1              50T   15T   35T  30% /mnt/disk_d/glusterfs/vol1
>> stor3data:/volumedisk0
>>                       101T  3,3T   97T   4% /volumedisk0
>> stor3data:/volumedisk1
>>                       197T   61T  136T  31% /volumedisk1
>>
>>
>> However I'm concerned because, as you can see, volumedisk0 on
>> stor3data is composed of 2 bricks on the same disk but on different
>> partitions (/dev/sdb1 and /dev/sdb2).
>> After applying the workaround, the shared-brick-count parameter was set
>> to 1 for all the bricks on all the servers (see below). Could this be
>> an issue?
>>
> No, this is correct. The shared-brick-count will be > 1 only if multiple
> bricks share the same partition.
>
>
>
>> Also, I can see that stor3data is now unbalanced with respect to
>> stor1data and stor2data. The three nodes have bricks of the same size,
>> but the stor3data bricks have about 1 TB less used space than those on
>> stor1data and stor2data:
>>
>
>
> This does not necessarily indicate a problem. The distribution need not be
> exactly equal and depends on the filenames. Can you provide more
> information on the kind of dataset (how many files, sizes etc) on this
> volume? Did you create the volume with all 4 bricks or add some later?
>
> Regards,
> Nithya
>
>>
>> stor1data:
>> /dev/sdb1              26T  1,1T   25T   4% /mnt/glusterfs/vol0
>> /dev/sdc1              50T   16T   34T  33% /mnt/glusterfs/vol1
>>
>> stor2data bricks:
>> /dev/sdb1              26T  1,1T   25T   4% /mnt/glusterfs/vol0
>> /dev/sdc1              50T   16T   34T  33% /mnt/glusterfs/vol1
>>
>> stor3data bricks:
>> /dev/sdb1              25T  638G   24T   3% /mnt/disk_b1/glusterfs/vol0
>> /dev/sdb2              25T  654G   24T   3% /mnt/disk_b2/glusterfs/vol0
>> /dev/sdc1              50T   15T   35T  30% /mnt/disk_c/glusterfs/vol1
>> /dev/sdd1              50T   15T   35T  30% /mnt/disk_d/glusterfs/vol1
>>
>>
>> [root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>
>> [root@stor2 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>
>> [root@stor3t ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>
>> Thanks for your help,
>> Greetings.
>>
>> Jose V.
>>
>>
>> 2018-02-28 5:07 GMT+01:00 Nithya Balachandran <nbalacha@redhat.com>:
>>
>>> Hi Jose,
>>>
>>> There is a known issue with gluster 3.12.x builds (see [1]) so you may
>>> be running into this.
>>>
>>> The "shared-brick-count" values seem fine on stor1. Please send us "grep
>>> -n "share" /var/lib/glusterd/vols/volumedisk1/*" results for the other
>>> nodes so we can check if they are the cause.
>>>
>>>
>>> Regards,
>>> Nithya
>>>
>>>
>>>
>>> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
>>>
>>> On 28 February 2018 at 03:03, Jose V. Carrión <jocarbur@gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> Some days ago all my glusterfs configuration was working fine. Today I
>>>> realized that the total size reported by the df command had changed and
>>>> is smaller than the aggregate capacity of all the bricks in the volume.
>>>>
>>>> I checked that the status of all the volumes is fine, all the glusterd
>>>> daemons are running and there are no errors in the logs; however, df
>>>> shows a wrong total size.
>>>>
>>>> My configuration for one volume: volumedisk1
>>>> [root@stor1 ~]# gluster volume status volumedisk1  detail
>>>>
>>>> Status of volume: volumedisk1
>>>> ------------------------------------------------------------------------------
>>>> Brick                : Brick stor1data:/mnt/glusterfs/vol1/brick1
>>>> TCP Port             : 49153
>>>> RDMA Port            : 0
>>>> Online               : Y
>>>> Pid                  : 13579
>>>> File System          : xfs
>>>> Device               : /dev/sdc1
>>>> Mount Options        : rw,noatime
>>>> Inode Size           : 512
>>>> Disk Space Free      : 35.0TB
>>>> Total Disk Space     : 49.1TB
>>>> Inode Count          : 5273970048
>>>> Free Inodes          : 5273123069
>>>> ------------------------------------------------------------------------------
>>>> Brick                : Brick stor2data:/mnt/glusterfs/vol1/brick1
>>>> TCP Port             : 49153
>>>> RDMA Port            : 0
>>>> Online               : Y
>>>> Pid                  : 13344
>>>> File System          : xfs
>>>> Device               : /dev/sdc1
>>>> Mount Options        : rw,noatime
>>>> Inode Size           : 512
>>>> Disk Space Free      : 35.0TB
>>>> Total Disk Space     : 49.1TB
>>>> Inode Count          : 5273970048
>>>> Free Inodes          : 5273124718
>>>> ------------------------------------------------------------------------------
>>>> Brick                : Brick stor3data:/mnt/disk_c/glusterfs/vol1/brick1
>>>> TCP Port             : 49154
>>>> RDMA Port            : 0
>>>> Online               : Y
>>>> Pid                  : 17439
>>>> File System          : xfs
>>>> Device               : /dev/sdc1
>>>> Mount Options        : rw,noatime
>>>> Inode Size           : 512
>>>> Disk Space Free      : 35.7TB
>>>> Total Disk Space     : 49.1TB
>>>> Inode Count          : 5273970048
>>>> Free Inodes          : 5273125437
>>>> ------------------------------------------------------------------------------
>>>> Brick                : Brick stor3data:/mnt/disk_d/glusterfs/vol1/brick1
>>>> TCP Port             : 49155
>>>> RDMA Port            : 0
>>>> Online               : Y
>>>> Pid                  : 17459
>>>> File System          : xfs
>>>> Device               : /dev/sdd1
>>>> Mount Options        : rw,noatime
>>>> Inode Size           : 512
>>>> Disk Space Free      : 35.6TB
>>>> Total Disk Space     : 49.1TB
>>>> Inode Count          : 5273970048
>>>> Free Inodes          : 5273127036
>>>>
>>>>
>>>> Then the full size for volumedisk1 should be: 49.1TB + 49.1TB + 49.1TB +
>>>> 49.1TB = *196,4 TB*, but df shows:
>>>>
>>>> [root@stor1 ~]# df -h
>>>> Filesystem            Size  Used Avail Use% Mounted on
>>>> /dev/sda2              48G   21G   25G  46% /
>>>> tmpfs                  32G   80K   32G   1% /dev/shm
>>>> /dev/sda1             190M   62M  119M  35% /boot
>>>> /dev/sda4             395G  251G  124G  68% /data
>>>> /dev/sdb1              26T  601G   25T   3% /mnt/glusterfs/vol0
>>>> /dev/sdc1              50T   15T   36T  29% /mnt/glusterfs/vol1
>>>> stor1data:/volumedisk0
>>>>                        76T  1,6T   74T   3% /volumedisk0
>>>> stor1data:/volumedisk1
>>>>                       *148T*   42T  106T  29% /volumedisk1
>>>>
>>>> That is almost exactly one brick less: 196,4 TB - 49,1 TB = 147,3 TB (~148TB)
>>>>
>>>> It's a production system so I hope you can help me.
>>>>
>>>> Thanks in advance.
>>>>
>>>> Jose V.
>>>>
>>>>
>>>> Below some other data of my configuration:
>>>>
>>>> [root@stor1 ~]# gluster volume info
>>>>
>>>> Volume Name: volumedisk0
>>>> Type: Distribute
>>>> Volume ID: 0ee52d94-1131-4061-bcef-bd8cf898da10
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 4
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: stor1data:/mnt/glusterfs/vol0/brick1
>>>> Brick2: stor2data:/mnt/glusterfs/vol0/brick1
>>>> Brick3: stor3data:/mnt/disk_b1/glusterfs/vol0/brick1
>>>> Brick4: stor3data:/mnt/disk_b2/glusterfs/vol0/brick1
>>>> Options Reconfigured:
>>>> performance.cache-size: 4GB
>>>> cluster.min-free-disk: 1%
>>>> performance.io-thread-count: 16
>>>> performance.readdir-ahead: on
>>>>
>>>> Volume Name: volumedisk1
>>>> Type: Distribute
>>>> Volume ID: 591b7098-800e-4954-82a9-6b6d81c9e0a2
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 4
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: stor1data:/mnt/glusterfs/vol1/brick1
>>>> Brick2: stor2data:/mnt/glusterfs/vol1/brick1
>>>> Brick3: stor3data:/mnt/disk_c/glusterfs/vol1/brick1
>>>> Brick4: stor3data:/mnt/disk_d/glusterfs/vol1/brick1
>>>> Options Reconfigured:
>>>> cluster.min-free-inodes: 6%
>>>> performance.cache-size: 4GB
>>>> cluster.min-free-disk: 1%
>>>> performance.io-thread-count: 16
>>>> performance.readdir-ahead: on
>>>>
>>>> [root@stor1 ~]# grep -n "share" /var/lib/glusterd/vols/volumedisk1/*
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 1
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor1data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 1
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor2data.mnt-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_c-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol:3:    option shared-brick-count 0
>>>> /var/lib/glusterd/vols/volumedisk1/volumedisk1.stor3data.mnt-disk_d-glusterfs-vol1-brick1.vol.rpmsave:3:    option shared-brick-count 0
>>>>
>>>>
>>>>
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>>>
>>>
>>>
>>
>


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
