List: gluster-users
Subject: Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict
From: Ryan Wilkinson <ryanwilk@gmail.com>
Date: 2018-02-27 15:03:44
Message-ID: CAOC1=1X5+SU9=L3Z-SjJKAanLHafweGrDT8UnRfnKP3VViBWZQ@mail.gmail.com
All volumes are configured as replica 3. I have no arbiter volumes.
The storage hosts are used for storage only, and the virt hosts are
dedicated virtualization hosts. I've checked throughput from the virt
hosts to all 3 gluster hosts and am getting ~9Gb/s.
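(For reference, a throughput check like that can be sketched with iperf3 -- the hostnames here are placeholders, and iperf3 is an assumption about tooling; run `iperf3 -s` on each storage host first.)

```shell
# Measure network throughput from a virt host to each gluster host.
# gluster1..gluster3 are placeholder hostnames -- substitute your own.
for host in gluster1 gluster2 gluster3; do
    echo "=== $host ==="
    iperf3 -c "$host" -t 10 -P 4   # 10-second test, 4 parallel streams
done
```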
On Tue, Feb 27, 2018 at 1:33 AM, Alex K <rightkicktech@gmail.com> wrote:
> What is your gluster setup? Please share volume details for where the VMs
> are stored. It could be that the slow host is holding the arbiter volume.
>
> Alex
>
> On Feb 26, 2018 13:46, "Ryan Wilkinson" <ryanwilk@gmail.com> wrote:
>
> > Here is the info about the RAID controllers. They don't seem to be the culprit.
> >
> > Slow host:
> > Name PERC H710 Mini (Embedded)
> > Firmware Version 21.3.4-0001
> > Cache Memory Size 512 MB
> > Fast Host:
> >
> > Name PERC H310 Mini (Embedded)
> > Firmware Version 20.12.1-0002
> > Cache Memory Size 0 MB
> > Slow host:
> > Name PERC H310 Mini (Embedded)
> > Firmware Version 20.13.1-0002
> > Cache Memory Size 0 MB
> > Slow host:
> > Name PERC H310 Mini (Embedded)
> > Firmware Version 20.13.3-0001
> > Cache Memory Size 0 MB
> > Slow Host:
> > Name PERC H710 Mini (Embedded)
> > Firmware Version 21.3.5-0002
> > Cache Memory Size 512 MB
> > Fast Host:
> > Name PERC H730
> > Cache Memory Size 1 GB
> >
> > On Mon, Feb 26, 2018 at 9:42 AM, Alvin Starr <alvin@netvel.net> wrote:
> >
> > > I would be really surprised if the problem was related to iDRAC.
> > >
> > > The iDRAC processor is a standalone CPU with its own NIC and runs
> > > independently of the main CPU.
> > >
> > > That being said it does have visibility into the whole system.
> > >
> > > Try using dmidecode to compare the systems, and take a close look at the
> > > RAID controllers and what size and form of cache they have.
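(A minimal sketch of that dmidecode comparison -- the filenames and hostnames are placeholders, and dmidecode needs root.)

```shell
# Dump the hardware inventory on each host, named after the machine.
dmidecode > "/tmp/$(hostname).dmi"

# Then copy both dumps to one machine and diff them, e.g.:
#   diff /tmp/fasthost.dmi /tmp/slowhost.dmi | less
```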
> > >
> > > On 02/26/2018 11:34 AM, Ryan Wilkinson wrote:
> > >
> > > I've tested about 12 different Dell servers. Ony a couple of them have
> > > Idrac express and all the others have Idrac Enterprise. All the boxes with
> > > Enterprise perform poorly and the couple that have express perform well. I
> > > use the disks in raid mode on all of them. I've tried a few non-Dell boxes
> > > and they all perform well even though some of them are very old. I've also
> > > tried disabling Idrac, the Idrac nic, virtual storage for Idrac with no
> > > sucess..
> > >
> > > On Mon, Feb 26, 2018 at 9:28 AM, Serkan Çoban <cobanserkan@gmail.com>
> > > wrote:
> > >
> > > > I don't think it is related to iDRAC itself; rather, some configuration
> > > > is wrong or there is some hardware error.
> > > > Did you check the battery of the RAID controller? Do you use the disks
> > > > in JBOD mode or RAID mode?
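(One way to check both on a PERC controller -- this assumes the LSI MegaCLI tool is installed; the binary name varies by build (MegaCli, MegaCli64, or Dell's perccli), so treat the exact invocation as an assumption and run as root.)

```shell
# Locate the MegaCLI binary, if present (name varies by packaging).
BIN=$(command -v MegaCli64 || command -v MegaCli || echo "")
if [ -n "$BIN" ]; then
    "$BIN" -AdpBbuCmd -GetBbuStatus -aALL   # battery state, charge level
    "$BIN" -LDInfo -Lall -aALL              # logical drives: RAID level, cache policy
else
    echo "MegaCLI not found; install the Dell/LSI storage CLI first"
fi
```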
> > > >
> > > > On Mon, Feb 26, 2018 at 6:12 PM, Ryan Wilkinson <ryanwilk@gmail.com>
> > > > wrote:
> > > > > Thanks for the suggestion. I tried both of these with no difference
> > > > > in performance. I have tried several other Dell hosts with iDRAC
> > > > > Enterprise and am getting the same results. I also tried a new Dell
> > > > > T130 with iDRAC Express and was getting over 700 MB/s. Have any other
> > > > > users had this issue with iDRAC Enterprise?
> > > > >
> > > > >
> > > > > On Thu, Feb 22, 2018 at 12:16 AM, Serkan Çoban <cobanserkan@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > Did you check the BIOS/Power settings? They should be set for high
> > > > > > performance.
> > > > > > Also, you can try booting with the "intel_idle.max_cstate=0" kernel
> > > > > > command-line option to make sure the CPUs are not entering
> > > > > > power-saving states.
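(A sketch of trying that suggestion on a RHEL-style host -- paths are distro-specific assumptions.)

```shell
# Inspect which C-states the CPUs currently expose before changing anything:
cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name 2>/dev/null

# To apply the option persistently, append it to GRUB_CMDLINE_LINUX in
# /etc/default/grub, e.g.:
#   GRUB_CMDLINE_LINUX="... intel_idle.max_cstate=0"
# then rebuild the grub config and reboot:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```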
> > > > > >
> > > > > > On Thu, Feb 22, 2018 at 9:59 AM, Ryan Wilkinson <ryanwilk@gmail.com>
> > > > > > wrote:
> > > > > > >
> > > > > > >
> > > > > > > I have a 3-host Gluster replicated cluster that is providing
> > > > > > > storage for our RHEV environment. We've been having issues with
> > > > > > > inconsistent performance from the VMs depending on which
> > > > > > > hypervisor they are running on. I've confirmed throughput to be
> > > > > > > ~9Gb/s to each of the storage hosts from the hypervisors. I'm
> > > > > > > getting ~300MB/s disk read speed when our test VM is on the slow
> > > > > > > hypervisors, and over 500 MB/s on the faster ones. The performance
> > > > > > > doesn't seem to be affected much by the CPU or memory in the
> > > > > > > hypervisors; I have tried a couple of really old boxes and got
> > > > > > > over 500 MB/s. The common thread seems to be that the poorly
> > > > > > > performing hosts all have Dell's iDRAC 7 Enterprise. I have one
> > > > > > > hypervisor that has iDRAC 7 Express and it performs well. We've
> > > > > > > compared system packages and versions till we're blue in the face
> > > > > > > and have been struggling with this for a couple of months, but
> > > > > > > that seems to be the only common denominator. On one of those
> > > > > > > iDRAC 7 hosts I've tried disabling the NIC, virtual drive, etc.,
> > > > > > > but saw no change in performance. In addition, I tried 5 new
> > > > > > > hosts and all are consistent with the iDRAC Enterprise theory.
> > > > > > > Anyone else had this issue?!
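(A quick way to reproduce read-speed numbers like these inside a test VM -- the file path and size are placeholders; point it at a large existing file on the VM's disk.)

```shell
# Sequential read test; iflag=direct bypasses the page cache so the
# result reflects actual disk/gluster throughput, not cached reads.
dd if=/path/to/testfile of=/dev/null bs=1M count=4096 iflag=direct

# Alternatively, for a raw sequential read test of the block device:
#   hdparm -t /dev/vda
```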
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > _______________________________________________
> > > > > > > Gluster-users mailing list
> > > > > > > Gluster-users@gluster.org
> > > > > > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Alvin Starr        ||   land: (905) 513-7688
> > > Netvel Inc.        ||   cell: (416) 806-0133
> > > alvin@netvel.net   ||
> > >
> > >
> >
> >
> >
>