List:       firewalls-gc
Subject:    Re: Harping on dynamic DNS, was RE: Two ISP's to one DMZ
From:       "Mark Horn [ Net Ops ]" <mhorn@funb.com>
Date:       1997-07-09 11:15:24

I am a proponent of using BGP in preference to Dynamic DNS + NAT.  But I
don't agree with your math.

First, you assume that Yahoo's ttl for their DNS records is 7 days.
It's actually 15 minutes.  To see for yourself, do:

        nslookup
        set q=soa
        yahoo.com
        set debug
        set q=a
        www.yahoo.com

The SOA record will show you the minimum ttl for all records in the zone.
The second query will show you what the specific value is for
www.yahoo.com.  This will generate a lot of output, but you'll see that
the ttl is 15 minutes.  Your calculations are based on a 20 minute ttl.
If Yahoo is losing >30% of their bandwidth to DNS, don't you think they'd
notice this and change their ttl's?
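
If you'd rather script that check than step through nslookup by hand,
here's a quick sketch in Python using the third-party dnspython
package (the script is my illustration, not part of the procedure
above):

        # Look up the A record for www.yahoo.com and print its ttl.
        # Requires the third-party dnspython package.
        import dns.resolver

        answer = dns.resolver.resolve("www.yahoo.com", "A")
        print(answer.rrset.ttl)  # ttl in seconds; 900 = 15 minutes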

Second, let's talk about the maximum possible DNS traffic per day at
Yahoo (assuming they get 30M hits per day).  The worst-case scenario
is that every single one of those hits generates a DNS query.  Let's
use your numbers and say that a DNS query and its response total 200
bytes.

        (30M Web hits/day) * (1 DNS query/Web hit) * (200 bytes/DNS query)
                        =  6G bytes/day of DNS traffic

That is the maximum amount of traffic that DNS could eat in a day -
every web hit generates a DNS query.  The only way it could be more is
if you had more than 30M web hits/day.
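
The bound is easy to verify; here is the same arithmetic as a few
lines of Python (nothing here beyond the numbers above):

        # Worst case: every one of the 30M daily hits triggers one
        # 200-byte DNS query + response.
        hits_per_day = 30e6
        bytes_per_query = 200
        print(hits_per_day * bytes_per_query)  # 6e9 -> 6G bytes/day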

Looking at it another way, let's use the numbers that you concluded:
        
        (144G bytes DNS traffic/day) / (200 bytes/DNS query)
                        = 720M DNS queries/day
        (720M DNS queries/day) / (30M Web hits/day)
                        = 24 DNS queries/Web hit

Somewhere in your logic, you're coming to the conclusion that you require
24 DNS queries for every web hit.  It's certainly *possible* to do 24 DNS
queries per web hit, but I don't know of any software that does it.
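
The back-calculation checks out the same way (again, just the numbers
already quoted in this thread):

        # Work backwards from the claimed 144G bytes/day of DNS traffic.
        dns_bytes_per_day = 144e9
        bytes_per_query = 200
        hits_per_day = 30e6
        queries_per_day = dns_bytes_per_day / bytes_per_query  # 720e6
        print(queries_per_day / hits_per_day)                  # 24.0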

While I don't know what the specific error is, I'm pretty certain that
your logic is wrong.

I see a more compelling reason to use BGP over Dynamic DNS + NAT:
convergence.  I read in your post that you've seen 20 minute
convergence in BGP.  That has not been our experience.  We did quite a
bit of testing before deciding to use BGP.
In our tests, we found that convergence time around a network outage
averaged about 6 seconds (as fast as 2 seconds and as slow as 20
seconds).  And this was mostly the time that the router took to notice
that its interface was down.  Our instrumentation wasn't fast enough
to measure the convergence time of the routing protocol alone (i.e.
excluding the time the router took to notice the interface outage).

For that same network coming back on line, it's a bit slower.  BGP seemed
to converge in a few minutes - as quickly as 2 minutes and as slowly as
10.

Based on these results, the worst case for BGP (roughly 10 minutes to
reconverge) is still twice as fast as Dynamic DNS + NAT, where clients
can keep using a stale record until the 20 minute ttl expires.  I
would love to hear more data about BGP convergence from others who are
using it.
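
To put that comparison in one place, the failover arithmetic looks
like this (the 20 minute ttl is the figure under discussion in this
thread, not something we measured):

        # Worst-case failover: BGP reconvergence vs. waiting out a DNS ttl.
        bgp_worst_seconds = 10 * 60  # slowest reconvergence we observed
        dns_ttl_seconds = 20 * 60    # stale records can live this long
        print(dns_ttl_seconds / bgp_worst_seconds)  # 2.0 -> BGP twice as fast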

-- 
Mark Horn <mhorn@funb.com>

PGP Public Key available from: http://www.es.net/hypertext/pgp.html
PGP KeyID/fingerprint: 00CBA571/32 4E 4E 48 EA C6 74 2E  25 8A 76 E6 04 A1 7F C1


Aaron J. Peterson says:
>Here, I'll do some math. Yahoo currently gets better than 30
>million hits per day, 35% of which are unique.  I'll graciously assume 100
>bytes each per DNS query & response, ignore referral traffic, and assume
>that NS entries have arbitrarily long expire times. Note that adding these
>factors would only heighten the difference.  Also, Yahoo is connected via
>a T3, which is ~=45Mbps. 
>
>So, as constants for a day of traffic we have (in base 10 units): 
>
>       30M hits * 35% unique ~=        10M hosts/day
>       10M queries * (200 bytes) =     2G bytes of query traffic
>       45Mbps/8*60*60*24 =             486G bytes/day avail. bandwidth
>
>With a ttl of 1 week used globally:
>       period = 7 days
>       2G bytes /7 day period  =       .286G bytes/day
>
>This amounts to 0.06% of Yahoo's available bandwidth.  This is reasonable.
>Now with a ttl of 20 minutes:
>       20 minutes * (1 day/1440 min) ~= 0.0139 days
>       period = 0.0139 days
>
>       2GB / 0.0139 day period =       144G bytes/day
>
>This amounts to *30%* of Yahoo's available bandwidth just for DNS traffic. 
>UGH! 30% of a T3!
>
>I am pretty sure my math is correct.  If so, that proves my point that
>being dynamic and decreasing the ttl accordingly breaks the scalability of
>DNS.  Look this over.  Confirm it.  Listen to our wise ARPA fathers, and
>feel guilty that you're causing the fall of the 'Net. ;^)
>
>This ignores the push-DNS stuff, but that has not been widely implemented
>yet and the technology is imperfect, to my knowledge.  Properly designed
>push techniques would mitigate the scale impact, but to an uncertain
>degree.  Distributed algorithms are such a bother.
>
>--
>Aaron J. Peterson
>Amateur Mathematician & Pedantic Ass
>
