
List:       drbd-user
Subject:    Re: [DRBD-user] Infiniband
From:       "Dr. Volker Jaenisch" <volker.jaenisch () inqbus ! de>
Date:       2008-11-22 14:29:02
Message-ID: 4928172E.60300 () inqbus ! de

Hi Igor!

We run a 1.5 TB storage array replicated with DRBD over InfiniBand in
production.
It runs very stably and very fast. Some weeks ago one of the
DRBD machines died due to a defective power supply.
Nobody (we run several production XEN VMs on this storage) except our
Nagios noticed the failover. It took some days to
get the power supply repaired. When the repaired machine came up again, the
resynchronisation was done in 12 seconds.

By the way: a great thank you to the DRBD makers and the community for this
fine piece of software!

Igor Neves wrote:
> Is it possible to connect two infiniband cards back2back / with a crossover cable?
Yes. We use Mellanox cards with a normal InfiniBand cable as a "crossover
cable". You need no special cable and no switch.

The only problem with going back-to-back is the network logic. Most InfiniBand
switches act as network controllers (aka subnet managers) which control the IB
routing.

So if you want to work back-to-back without an InfiniBand switch, you need to
run a subnet manager in software, e.g. OpenSM

https://wiki.openfabrics.org/tiki-index.php?page=OpenSM

on one of your two DRBD nodes.
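
In practice that only means installing and starting the opensm daemon on one
node. The commands below are a rough sketch (package and init script names may
differ per distribution); the diagnostics are the usual OFED tools:

    # on ONE of the two nodes only (package name assumed to be "opensm")
    apt-get install opensm            # or: yum install opensm
    /etc/init.d/opensm start          # start the subnet manager

    # verify the fabric from either node
    ibstat                            # port state should go to "Active"
    ibhosts                           # both HCAs should be listed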

With DRBD over TCP over InfiniBand we get a TCP transfer rate of 2-3 GBit/s
full duplex. This is far below the bandwidth of native InfiniBand (we measure
18.6 GBit/s back-to-back). Since DRBD runs over TCP only, the InfiniBand link
has to run in TCP-over-InfiniBand emulation mode (IPoIB), which slows things
down due to the protocol stack overhead.
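
From DRBD's point of view the IPoIB interface is just another network
interface; you simply point the resource at the IPoIB addresses. A minimal
sketch in DRBD 8.x syntax (hostnames, addresses and devices are made up):

    resource r0 {
      protocol C;
      on node-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7788;   # IP of ib0 (IPoIB) on node-a
        meta-disk internal;
      }
      on node-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7788;   # IP of ib0 (IPoIB) on node-b
        meta-disk internal;
      }
    }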

I am looking forward to a DRBD solution which works over RDMA. :-)

But even in emulation mode the DRBD performance is astonishing. We measure a
continuous resync rate of 100 MByte/s, which is the write limit of the
underlying disk arrays.

The latency of the TCP-over-InfiniBand link we measure is four times lower
than with expensive 1 GBit Ethernet NICs (back-to-back). As you all know,
latency is the common bottleneck for replication systems such as DRBD. I have
heard rumors that 10 GBit Ethernet will also have low latency, but I have not
seen it with my own eyes.
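
For a rough comparison you can simply compare round-trip times over both
links; the addresses below are placeholders for the IPoIB and GigE peers:

    ping -c 1000 -q 10.0.0.2      # peer via ib0 (IPoIB)
    ping -c 1000 -q 192.168.1.2   # peer via the GigE crossover link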

Another advantage is that you can export your DRBD array via iSCSI over
InfiniBand (using full-speed RDMA, i.e. iSER) to the application servers.
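
One way to do that is the user-space STGT target, which has an iSER transport.
A rough sketch (the target name and backing device are made up, and other
targets such as SCST would work as well):

    tgtadm --lld iser --op new --mode target --tid 1 \
           -T iqn.2008-11.de.example:drbd.store
    tgtadm --lld iser --op new --mode logicalunit --tid 1 --lun 1 \
           -b /dev/drbd0
    tgtadm --lld iser --op bind --mode target --tid 1 -I ALL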

Best Regards

Volker

-- 
====================================================
   inqbus it-consulting      +49 ( 341 )  5643800
   Dr.  Volker Jaenisch      http://www.inqbus.de
   Herloßsohnstr.    12      0 4 1 5 5    Leipzig
   N  O  T -  F Ä L L E      +49 ( 170 )  3113748
====================================================

_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


