List: drbd-user
Subject: Re: [DRBD-user] drbd storage size
From: "Cesar Peschiera" <brain () click ! com ! py>
Date: 2014-10-25 11:38:05
Message-ID: D0CB3041D4DB42EE939B8BD1A026BFDD () gerencia
Hi Meij,
In three weeks I will have two Intel X520-QDA1 NICs at 40 Gb/s; see these
links:
http://ark.intel.com/products/68672/Intel-Ethernet-Converged-Network-Adapter-X520-QDA1
http://www.intel.com/content/www/us/en/network-adapters/converged-network-adapters/ethernet-x520-qda1-brief.html
In my hardware setup I also have a Dell H710P RAID controller (LSI chipset
with 1 MB of cache) and two groups of four 15K RPM SAS HDDs, with each group
configured in RAID 10. This setup is applied to each server (the HDDs for the
OS are on a separate RAID), so obviously I don't have much storage compared
to yours.
These servers will be running DRBD 8.4.5.
If you want to know the results of my tests, just let me know.
Best regards
Cesar
----- Original Message -----
From: "Meij, Henk" <hmeij@wesleyan.edu>
To: <drbd-user@lists.linbit.com>
Sent: Thursday, October 23, 2014 5:10 PM
Subject: Re: [DRBD-user] drbd storage size
> a) it turns out the counter (8847740/11287100)M goes down, not up; duh, I
> never noticed
>
> b) ran a plain rsync across eth0 (public, with switches/routers) and eth1
> (NIC to NIC)
> eth0 sent 585260755954 bytes received 10367 bytes 116690412.98 bytes/sec
> eth1 sent 585260755954 bytes received 10367 bytes 122580535.41 bytes/sec
> so my LSI RAID card is behaving, and DRBD is slowing the initialization
> down somehow. I found chapter 15 and will try some of its suggestions, but
> ideas are welcome.
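>
> (For reference, a minimal sketch of the kind of resync tuning that chapter
> covers; the resource name r0 and the rates below are assumptions, not
> values from this thread:
>
>   # /etc/drbd.d/r0.res (excerpt)
>   resource r0 {
>     disk {
>       c-plan-ahead   20;    # keep the variable-rate resync controller on
>       c-fill-target  1M;    # amount of in-flight data to aim for on the link
>       c-max-rate     400M;  # ceiling for resync traffic
>       c-min-rate     40M;   # floor so resync is not starved by app I/O
>     }
>   }
>
> and apply it with "drbdadm adjust r0" on both nodes.)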
>
> c) for grins
> version: 8.4.5 (api:1/proto:86-101)
> GIT-hash: 1d360bde0e095d495786eaeb2a1ac76888e4db96 build by
> mockbuild@Build64R6, 2014-08-17 19:26:04
> 0: cs:SyncTarget ro:Secondary/Secondary ds:Inconsistent/UpToDate C r-----
> ns:0 nr:3601728 dw:3601408 dr:0 al:0 bm:0 lo:4 pe:11 ua:3 ap:0 ep:1
> wo:f oos:109374215324
> [>....................] sync'ed: 0.1% (106810756/106814272)M
> finish: 731:23:05 speed: 41,532 (31,868) want: 41,000 K/sec
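>
> (The "want:" figure is the sync rate DRBD is currently aiming for. On 8.4 it
> can be raised on the fly with drbdadm; a sketch, assuming the resource is
> named r0 and the arrays can actually sustain the higher rate:
>
>   # fixed-rate resync at ~110 MB/s (c-plan-ahead=0 disables the controller)
>   drbdadm disk-options --c-plan-ahead=0 --resync-rate=110M r0
>   # or keep variable-rate resync and just raise the ceiling
>   drbdadm disk-options --c-max-rate=200M r0
> )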
>
> 100 TB in 731 hours would be about 30 days. Can I expect large delta-data
> replication to go equally slowly using DRBD?
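>
> (The finish estimate is just the remaining data divided by the current rate:
> 106,810,756 MB x 1024 / 41,532 KB/s ≈ 2,633,000 s ≈ 731 h ≈ 30.5 days.)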
>
> -Henk
>
>
>
> ________________________________________
> From: drbd-user-bounces@lists.linbit.com
> [drbd-user-bounces@lists.linbit.com] on behalf of Meij, Henk
> [hmeij@wesleyan.edu]
> Sent: Thursday, October 23, 2014 9:57 AM
> To: Philipp Reisner; drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] drbd storage size
>
> Thanks for the write-up, y'all. I'll have to think about #3; I'm not sure I
> grasp it fully.
>
> Last night I started a 12 TB test and kicked off the first initialization
> for observation (node 0 is primary).
> I have node0:eth1 wired directly into node1:eth1 with a 10-foot CAT 6 cable
> (MTU=9000).
> Data from node1 to node0
> PING 10.10.52.232 (10.10.52.232) 8970(8998) bytes of data.
> 8978 bytes from 10.10.52.232: icmp_seq=1 ttl=64 time=0.316 ms
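>
> (For anyone reproducing this check: the interface name and the peer address
> 10.10.52.232 are from this setup, the rest is a minimal sketch:
>
>   ip link set dev eth1 mtu 9000
>   # -M do forbids fragmentation; 8972 = 9000 - 20 (IP) - 8 (ICMP) bytes
>   ping -M do -s 8972 -c 3 10.10.52.232
>
> If the replies come back unfragmented, jumbo frames work end to end.)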
>
> This morning's progress report from node1 (drbd v8.4.5):
>
> [===>................] sync'ed: 21.7% (8847740/11287100)M
> finish: 62:50:50 speed: 40,032 (39,008) want: 68,840 K/sec
>
> which confuses me: 8.8M out of 11.3M is roughly 78% synced, no? I will let
> this test finish before I do a dd attempt.
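>
> (A sketch of the kind of dd attempt meant here, run against the raw backing
> device; /dev/sdb1 is taken from the iostat output below, the sizes are
> assumptions, and the write test destroys whatever is on the device:
>
>   # sequential write, bypassing the page cache
>   dd if=/dev/zero of=/dev/sdb1 bs=1M count=10000 oflag=direct
>   # sequential read
>   dd if=/dev/sdb1 of=/dev/null bs=1M count=10000 iflag=direct
> )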
>
> iostat shows CPU %idle at 99%+ and little to no %iowait (near 0%), and
> iotop confirms very little I/O (<5 K/s). Typical data:
> Device:  rrqm/s   wrqm/s    r/s      w/s   rsec/s    wsec/s  avgrq-sz  avgqu-sz  await  svctm  %util
> sdb1       0.00   231.00   0.00   156.33     0.00  79194.67    506.58      0.46   2.94   1.49  23.37
>
> Something is throttling this I/O, as 40 MB/s is about half of what I was
> hoping for. I will dig some more.
>
> -Henk
>
> ________________________________________
> From: drbd-user-bounces@lists.linbit.com
> [drbd-user-bounces@lists.linbit.com] on behalf of Philipp Reisner
> [philipp.reisner@linbit.com]
> Sent: Thursday, October 23, 2014 9:17 AM
> To: drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] drbd storage size
>
> On Thursday, 23 October 2014, at 08:55:03, Digimer wrote:
> > On 23/10/14 04:00 AM, Philipp Reisner wrote:
> > > 2a) Initialize both backend devices to a known state.
> > >
> > > I.e. dd if=/dev/zero of=/dev/sdb1 bs=$((1024*1024)) oflag=direct
> >
> > Question;
> >
> > What I've done in the past to speed up initial sync is to create the
> > DRBD device, pause-sync, then do your 'dd if=/dev/zero ...' trick to
> > /dev/drbd0. This effectively drives the resync speed to the max possible
> > and ensures full sync across both nodes. Is this a sane approach?
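> >
> > (For readers following along, a rough sketch of that trick; the resource
> > name r0 is an assumption, the node doing the dd must be Primary, and the
> > dd overwrites anything on the device:
> >
> >   drbdadm pause-sync r0
> >   dd if=/dev/zero of=/dev/drbd0 bs=1M oflag=direct  # writes replicate to the peer
> >   drbdadm resume-sync r0   # let DRBD catch up on anything still out of sync
> > )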
> >
>
> Yes, sure, that is a way to do it. (I have the impression that is something
> from the drbd-8.3 world.)
>
> I do not know off the top of my head whether that will be faster than the
> built-in background resync in drbd-8.4.
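>
> (For completeness, when both backends have been zeroed as in step 2a, DRBD
> 8.4 also lets you skip the initial sync entirely; a sketch from memory,
> assuming a resource named r0 on fresh, empty devices:
>
>   drbdadm create-md r0        # on both nodes
>   drbdadm up r0               # on both nodes; they connect Inconsistent/Inconsistent
>   drbdadm new-current-uuid --clear-bitmap r0   # on one node only
>
> After that both sides are treated as in sync without copying the zeros.)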
>
> Best,
> Phil
>
> _______________________________________________
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>
_______________________________________________
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user