
List:       afripv6-discuss
Subject:    Re: [afripv6-discuss] IPv6 rollout…
From:       Hisham Ibrahim <hisham@afrinic.net>
Date:       2012-08-03 8:36:27
Message-ID: 3B05800D-AE32-4719-A5ED-7F3B0DA7466A@afrinic.net



Andrew,

Not that your ego needs any inflation :)

but since you and I started chatting a while back about the big IPv6 academia plans
you had, I have been monitoring the traffic spikes from ZA to see if you really made
a dent.

According to Google's statistics:

http://www.google.com/ipv6/statistics.html#tab=per-country-ipv6-adoption

ZA's v6 adoption has jumped since last week from 0.04% to 0.18%,

allowing ZA to steal first place from ZM (currently at 0.13%) and leaving TZ with
the bronze at 0.11% (I clearly have the Olympic bug these days).

I think this is a clear sign that not only is academia "key" for the v6 uptake in the
region, but also that taking the time to properly design your v4 network, getting rid
of any NAT configuration you may have in the process, and then applying IPv6 on top
of that clean topology, is the best course of action moving forward.

And luckily for us in our region, we still have the opportunity to do that properly.

Andrew, I am waiting on your documentation; hopefully we can replicate your efforts
in universities all over Africa.

Regards, and more graphs to follow,
Hisham

On Jul 31, 2012, at 11:55 PM, Andrew Alston wrote:

> Hi Guys,
> 
> OK, a bit more info now that I have time to sit down and write; sorry, things have
> been rather hectic.
> Here is how this came about, and a bit more of the full story.
> 
> The university in question was running a network without a real topology; in
> essence, it was a flat network, v4 only, and a massive one at that.  This was
> causing REAL issues, and it was the result of years and years of legacy.  The
> decision was taken that IPv6 would be required, but, simply put, we had to fix the
> network first.  So, first step: how do you migrate from a flat network running a
> single /16 to a segmented network, and do it on a live campus environment that has
> 38 thousand users on the network every day?  The answer... very, very carefully,
> with a lot of planning, and some very careful segment and IP allocation.
> So, the following decisions were made:
> 
> A.) We get rid of NAT entirely - if we were going to do this, we would do it
> properly, and the dual stack would be on live IPs, in an identical v4 and v6
> topology network-wide.
> B.) We divide the network into a three-tier network: core, distribution and edge.
> C.) Core/Distribution would be routed, would have to be scalable, and we'd use
> SP-style protocols to do this.
> D.) Edge would remain layer 2, but we would choose not to span any L2 across
> distributions.  Since Core/Distribution will be MPLS-enabled in future, if we need
> L2 between two points, we can EoMPLS it.
> So we did our planning, and discovered (to our horror) that with the topology
> change, the elimination of NAT, and the wireless infrastructure we had planned,
> we'd need a LOT more IPv4 space.  So we applied for, and were granted, a second
> /15 of PI space from AfriNIC.  We then started implementation.
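> 
> (A back-of-the-envelope sketch of why the old /16 couldn't survive NAT removal -
> rounded numbers, not the actual address plan:)
> 
>     users = 38_000
>     addrs_in_slash16 = 2 ** (32 - 16)   # 65,536 addresses in the old /16
>     print(addrs_in_slash16 / users)     # < 2 addresses per user
>     # With ~15k wired points, thousands of VoIP phones, campus-wide
>     # wireless and subnetting overhead, a live-IP dual stack needs far
>     # more than that; a /15 adds another 131,072 addresses.
>     print(2 ** (32 - 15))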
> First step: pick a network segment.  We chose the student residences (not because
> we hate the students, but because we wouldn't break any critical research if it
> all went horribly wrong).  Then we moved all the student residences behind a
> single distribution, our residence distribution.  At this point, VLAN 1 was still
> spanned to the residence distribution, so NOTHING had broken in doing this; all
> was working.  Then we created a point-to-point link between that distribution and
> the core.  On the point-to-point we put a /30 for v4 and a /126 for v6.  (Sadly,
> the gear we are using supports neither /31 nor /127.)
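> 
> (As a rough illustration of the addressing maths - a quick check with Python's
> ipaddress module; the prefixes are documentation examples (RFC 5737 / RFC 3849),
> not our real ones:)
> 
>     import ipaddress
> 
>     v4_link = ipaddress.ip_network("192.0.2.0/30")
>     v6_link = ipaddress.ip_network("2001:db8:0:1::/126")
> 
>     # A /30 leaves exactly two usable addresses, one per end of the link.
>     print(list(v4_link.hosts()))   # [192.0.2.1, 192.0.2.2]
> 
>     # A /126 holds four addresses; hosts() skips the subnet-router
>     # anycast (::0), leaving ::1-::3, of which we use two.
>     print(list(v6_link.hosts()))
> 
> (With gear that supported /31 (RFC 3021) and /127 (RFC 6164), nothing would be
> wasted, but ours doesn't.)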
> Then, we enabled OSPF for v4 and OSPFv3 for v6 across the point-to-point.  There
> is a slight difference in the topologies at this point, because the OSPF for v4
> was configured to carry ONLY the point-to-points and the loopbacks from the
> distributions, with the rest covered by iBGP, whereas for v6 the hardware didn't
> do IPv6 BGP, so we had to carry the full v6 routing table in OSPFv3.
> Note that at this point we still had zero downtime.
> 
> Then, for each residence behind the distribution, we created a VLAN and assigned
> it an IPv4 segment and an IPv6 segment.  To avoid a mass of routes in our routing
> tables, these segments were all taken out of supernets we had dedicated to each
> distribution, and the supernets were what we pushed into the routing table at both
> the v4 and the v6 level.  So the VLANs were now created on the distribution and
> the routing was working.  We fixed up the DHCP for v4 as well, so that was in
> place and ready to go with the correct scopes.  (Note: we are using RA for v6 at
> this point; we haven't gotten around to DHCPv6 yet, so most people are still
> hitting the DNS servers on v4 addresses, since we can't push DNS via RA.)
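> 
> (A minimal sketch of the supernet idea - hypothetical prefixes, not our real plan.
> Each distribution gets one v4 and one v6 supernet, the per-residence VLANs are
> carved out of those, and only the supernets go into the routing table:)
> 
>     import ipaddress
> 
>     dist_v4 = ipaddress.ip_network("10.80.0.0/18")        # supernet routed to one distribution
>     dist_v6 = ipaddress.ip_network("2001:db8:1000::/56")  # matching v6 supernet
> 
>     # Carve one /24 and one /64 per residence VLAN; a /64 per segment
>     # is what RA/SLAAC expects.
>     for vlan, (v4, v6) in enumerate(zip(dist_v4.subnets(new_prefix=24),
>                                         dist_v6.subnets(new_prefix=64)), start=100):
>         print(f"vlan {vlan}: {v4}  {v6}")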
> Note: Still ZERO downtime to anyone
> 
> Then we took the newly created VLANs on the distribution, trunked them down to the
> edge switches, waited till after hours, and moved the edge ports into the correct
> VLANs (different VLANs for student PCs, wireless APs, etc.).  The actual move into
> the correct VLANs was a single command on each switch.  Then we simply forced a
> port flap on every port as we went.  The port flap was to force a DHCP
> reallocation on v4.
> Bang, the residences came up on the new topology with v4 and v6 - total downtime
> to the clients: less than 30 seconds.
> Then we did a rinse-and-repeat job through the various distributions (we're still
> busy with some of them - 6 out of 11 done so far, and probably around 300 or 400
> edge switches tagged correctly).
> Once we were sure the topology was working and the IPv6 was working, the next step
> was to enable IPv6 on the proxy servers so that they could fetch via IPv6.  We did
> this, and instantly saw around 30% of the traffic coming in via v6, primarily
> Google, YouTube, Facebook and Akamai.  Note, however, that at this point the
> clients were still reaching the proxies via v4, though the proxies were fetching
> via v6.  So, next step: put in AAAA (quad-A) records for the proxy servers and for
> the PAC file round robin.  Suddenly, everyone who had a v6 address was fetching
> from the proxy servers via v6, irrespective of whether the proxies were fetching
> via v4 or v6.
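> 
> (For the curious, a sketch of what the AAAA records change on the client side.
> Once a name resolves to both A and AAAA records, getaddrinfo() on a dual-stacked
> host typically sorts the v6 addresses first under the default address-selection
> rules (RFC 3484), so clients start talking to the proxies over v6.  The hostname
> below is a dual-stacked stand-in for the proxy name:)
> 
>     import socket
> 
>     # Dual-stacked resolvers normally return AF_INET6 results
>     # ahead of AF_INET ones.
>     for family, _, _, _, sockaddr in socket.getaddrinfo(
>             "www.google.com", 80, proto=socket.IPPROTO_TCP):
>         print("IPv6" if family == socket.AF_INET6 else "IPv4", sockaddr[0])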
> Suddenly we had a situation where 50% of the traffic coming in was via IPv6, and
> we in fact peaked at well over 100 Mbit/s of IPv6 traffic coming in off the
> Internet today.
> Our next steps, of course, are to migrate the rest of the distributions and edge
> to the new network, and in fact in the next 10 minutes we'll be moving another
> thousand edge ports into this.  Once this is done, we'll start looking closely at
> the server infrastructure and at how we go about putting the rest of the
> production servers into the new topology and IPv6-enabling them.  We expect this
> to be the most problematic part, since we know there are certain services which
> have issues with IPv6, but we'll work around those when we get there.
> In summary - it is entirely possible to take a network with around 15 thousand
> wired network points, a few hundred wireless access points and a few thousand
> VoIP phones, and completely redeploy it at both the v4 and the v6 level with
> almost no downtime, if the planning is correct.  The traffic levels also prove
> there is IPv6 content out there, lots of it, and we're happy to use it!  It just
> takes some planning, some forethought and some people willing to work really hard
> at strange hours to get it done.
> For interest's sake, graphs can be seen here:
> 
> http://graphs.tenet.ac.za/iris/browser/browse?username=UFS&selectedmnemonicgroup=TSN81
>  
> The graph marked vl1081 is the IPv4 interface to TENET (the South African NREN);
> the graph marked vl3081 is the IPv6 interface to TENET.  We specifically asked
> them to provide v4 and v6 on separate interfaces, as it allows us to see the
> traffic on a more individual basis as well, which was useful.
> Hope this answers some of the questions I have been sent off-list, and provides
> hope for those who believe that IPv6 migration is impossible.  Never forget: we
> did it on both v4 and v6 *at the same time*, on a live network, with no downtime,
> so if anyone doubts it can be done, we're proof that it can.
> Thanks
> 
> Andrew Alston
> Network Consultant
> 
> NOTE: I write the above as a private individual and private consultant, and have
> gained specific permission from my client (the University of the Free State) to
> relay this story.  I would like to say a special thank you to them for allowing
> me to share this with you as well.
> 
> On 31 Jul 2012, at 3:58 PM, Maye diop <mayediop@gmail.com> wrote:
> 
> > Dear Andrew,
> > Congratulations.
> > I look forward to more details to share with my universities.
> > Best Regards.
> > On 31 Jul 2012, at 07:11, "Andrew Alston" <alston.networks@gmail.com> wrote:
> > > 
> > > Hi Guys,
> > > 
> > > So, while I'll be sending out a lot more data soon, with a lot more
> > > information on exactly what we did and how we did it, I thought I would
> > > share some news that I for one found rather exciting.
> > > Yesterday evening, starting at around 7pm, one of the South African
> > > universities turned up IPv6 in a fairly comprehensive manner.  Now, I'm not
> > > talking about turning up IPv6 on a few servers; I'm talking about integrating
> > > it into every part of their network.  By 2:30am this morning it was running
> > > on all their proxy servers, all their residence networks, the wireless
> > > networks, all the lab PCs and a good portion of the staff network.  The
> > > topology used was identical to that of the IPv4, and as the rest of the
> > > network is migrated to the new IPv4 topology, v6 will be implemented on
> > > everything in dual stack alongside it as well.
> > > Now, here is where things get interesting.  Another network dual-stacked is
> > > no real news, so let's talk about traffic levels.
> > > The university in question is now running anywhere between 30 and 50 percent
> > > of its Internet traffic on IPv6, and it's working flawlessly so far.  So
> > > flawlessly, in fact, that even with Apple's default implementation of Happy
> > > Eyeballs, which tests RTT and falls back to v4 if the v6 latency is higher,
> > > the Apple machines we tested, running Lion and Mountain Lion, were still
> > > choosing IPv6 most of the time.
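> > > 
> > > (Not Apple's algorithm, just a minimal illustration of the Happy Eyeballs
> > > idea using Python's asyncio (3.8+), which can stagger v6 and v4 connection
> > > attempts and let the fastest one win; the hostname is only an example:)
> > > 
> > >     import asyncio
> > > 
> > >     async def main() -> None:
> > >         # Attempts are made in resolver order (typically v6 first); if
> > >         # one hasn't connected within 250 ms the next family starts in
> > >         # parallel, and the first to succeed wins.
> > >         reader, writer = await asyncio.open_connection(
> > >             "www.google.com", 80, happy_eyeballs_delay=0.25)
> > >         print("connected via", writer.get_extra_info("peername")[0])
> > >         writer.close()
> > >         await writer.wait_closed()
> > > 
> > >     asyncio.run(main())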
> > > I am not going to say this little rollout has been easy, though; we had to
> > > re-architect the entire network (that had to happen anyway for various
> > > reasons), and we added the v6 as part of that project.  It would not have
> > > been possible to do that without first getting our hands on another /15
> > > worth of IPv4 space to allow that re-architecting to happen properly.
> > > As I said, though, in the coming days we'll write up what we did in a lot
> > > more detail and send through some graphs and other information.  I just had
> > > to share the fact that we're seeing, at points, half the traffic of a
> > > standard university coming in from the Internet over IPv6!
> > > Thanks
> > > 
> > > Andrew Alston
> > > Network Consultant
> > > _______________________________________________
> > > afripv6-discuss mailing list
> > > afripv6-discuss@afrinic.net
> > > https://lists.afrinic.net/mailman/listinfo.cgi/afripv6-discuss
> > _______________________________________________
> > afripv6-discuss mailing list
> > afripv6-discuss@afrinic.net
> > https://lists.afrinic.net/mailman/listinfo.cgi/afripv6-discuss
> 
> _______________________________________________
> afripv6-discuss mailing list
> afripv6-discuss@afrinic.net
> https://lists.afrinic.net/mailman/listinfo.cgi/afripv6-discuss


_______________________________________________
afripv6-discuss mailing list
afripv6-discuss@afrinic.net
https://lists.afrinic.net/mailman/listinfo.cgi/afripv6-discuss

