
List:       focus-virus
Subject:    Possible Followup to Code Red
From:       "Sean McPherson" <seanmcp () hotmail ! com>
Date:       2001-07-31 12:37:33

All,

I have written up a plausible set of scenarios for what I think we might see 
as a followup to Code Red. The worm's logic has been analysed, and 
information has been released on why it has been effective and, more 
importantly, why it has been less effective than it could have been.

The next version(s) may be either more destructive or have a broader target 
than a single site on the Net. Please take a look at the attached/included 
.txt file and let me know what you think, since I know it takes people with 
much more knowledge of handling these issues than I have to come up with 
reasonable solutions or countermeasures.

Please send me any suggestions or criticisms. Even something like "Hey 
dipshit, you are completely wrong about <this part>; every idiot knows it 
works like <this>" helps, although I'd rather see something a bit more 
refined.

Thanks,

Sean McPherson
seanmcp@hotmail.com

A Possible Follow-Up to Code Red
July 31, 2001
Sean McPherson
seanmcp@hotmail.com

Description and Motivation for DoS

When a cracker is attempting to cause a disruption on the Internet, he first 
determines his desired result, then builds or downloads tools to reach that 
outcome (in effect, taking the primary target offline or disrupting its 
level of service), and deploys the attack. The cracker may also choose 
secondary targets to serve either as collateral damage or as a distraction 
from the primary target.

All scenarios relating to a Denial of Service (DoS) style of attack 
ultimately come down to the location of bottlenecks; these bottlenecks can 
be a single server (through over-utilization or compromise of the hardware, 
operating system, or software), a single internet connection (whether a 
single upstream carrier or all network paths routing to the target), or the 
Internet backbone itself.

Recent History of a DoS Worm

The recent Code Red worm targeted a specific server through the attempted 
over-utilization of available network bandwidth (network flooding). The worm 
was designed to infect as many hosts as possible for a period of time 
leading up to a pre-determined 'trigger' of midnight GMT between July 19th 
and July 20th. When the infected hosts' system time reached the trigger 
time, they were to attempt to connect to the IP address that had been 
allocated for www.whitehouse.gov, and, after connecting successfully, begin 
sending very large streams of data to the server. A CAIDA study shows that a 
minimum of 359,000 servers were known to be infected and active during a 
14-hour period on July 19th/20th.

The attack failed to have the expected impact for three main reasons. First, 
the worm was written with the IP address for www.whitehouse.gov hard-coded, 
making it simple for the whitehouse.gov administrators to move the site to 
another IP and for the major backbone providers to configure their equipment 
to drop traffic destined for that address. Second, the worm required the 
attacking host to make a successful connection to the target IP before 
sending the large quantities of data; since the web site had been moved, no 
successful connections could be made and the streams were never initiated. 
Finally, the worm had a preset cutoff time to stop attacking.

The code could have been much more devastating if the worm had been written 
with different target determination logic, a better-planned attack sequence, 
and no predetermined cutoff time. If the target had been identified by 
hostname rather than by IP address, moving the site to a new IP would not 
have prevented the attacks from finding it, and black-holing web access to 
the site to stop the huge flows of illegitimate traffic would also have 
prevented legitimate traffic from reaching it, taking the site offline more 
effectively than the attack itself.


Possible Evolution for the Next DoS Worm

The effectiveness of this worm's infection routine shows how quickly another 
worm might spread, and the question then becomes the effectiveness of the 
payload. Using the numbers from the CAIDA study, assume a future attack on 
IIS servers could infect a minimum of 300,000 servers. The majority of 
servers running IIS are on higher-speed internet connections, as they are 
often located in data centers, in offices with high-speed access, or on home 
connections such as cable or DSL lines. If each of these 300,000 machines 
had an available bandwidth of 256 kbit/s, the total bandwidth available for 
use in an attack by this array of infected machines could reach roughly 
75 Gbit/s.
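
For reference, the arithmetic behind that figure (a rough estimate using the 
300,000-host and 256 kbit/s-per-host assumptions from the paragraph above) 
works out as follows, shown as a few lines of Python:

    hosts = 300_000
    kbit_per_host = 256                   # assumed upstream bandwidth per host (kbit/s)

    total_gbit = hosts * kbit_per_host / 1_000_000   # kbit/s -> Gbit/s
    print(f"{total_gbit:.1f} Gbit/s")                # 76.8 Gbit/s, i.e. roughly 75 Gbit/s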

If a cracker's next target isn't a single site, but rather the performance 
of the internet as a whole, using 'Code Red' as a code-base would still 
speed his production time. Assume another, similar, IIS bug appears, but 
this time the cracker has more specific goals: target two vastly different 
but critical portions of the internet, and attempt to make the attack as 
self-sustaining as possible. The primary targets would be the internet 
peering points between backbone providers (specifically a series of key 
routers), with secondary targets of the cracker's choosing (possibly the 
root servers of the domain name system, as any attack on these servers has 
the potential to affect almost all traffic on the internet).

Breakdown of Potential Attack Sequence (Router CPU Over-utilization)

* Between infection time and a pre-determined trigger time, an infected host 
would attempt to spread the worm to as many other random IP addresses as 
possible.

* When the trigger time is reached, the worm would load an initial array of 
targets, each with a priority level for later use.

* The first target in the list would be attacked with a large stream of 
'invalid' packets, specifically designed to cause excess utilization of a 
router's processor, for a predetermined period of time.

* When this period of time is over, check to see if the target is responding 
to any valid requests. If so, repeat the attack. If not, and this was the 
first target in the list, skip to the next host on the list. If this wasn't 
the first target on the list, go back one target and see if it is responding 
to requests. If not, skip to the next host on the list.

* If the list of available targets to attack drops below 5 entries, check 
the priority level of targets from the initial list. Attack the remaining 
targets with the highest priority code first, to ensure the maximum number 
of infected servers target a single host. (A minimal sketch of this 
selection loop follows the list.)
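
A minimal sketch of the selection and fallback logic described above, written 
in Python for illustration only: the probe and traffic-generation steps are 
left as empty placeholders (the write-up does not specify them), and an 
'available' target is interpreted here as one that still answers valid 
requests.

    ATTACK_PERIOD = 3600        # assumed length of one attack window, in seconds
    MIN_REMAINING = 5           # threshold from the sequence above

    def is_responding(address):
        # Placeholder: would check whether the target still answers valid
        # requests; no probe is implemented in this sketch.
        return False

    def attack(address, seconds):
        # Placeholder: the write-up describes a stream of 'invalid' packets
        # aimed at the router's CPU here; no traffic generation is included.
        pass

    def select_and_attack(targets):
        # targets: list of (address, priority) pairs loaded at trigger time.
        index = 0
        while targets:
            available = [t for t in targets if is_responding(t[0])]
            if len(available) < MIN_REMAINING:
                # Few targets still reachable: fall back to the targets with
                # the highest priority code so infected hosts converge.
                targets = sorted(targets, key=lambda t: t[1], reverse=True)
                index = 0
            address, _priority = targets[index]
            attack(address, ATTACK_PERIOD)
            if is_responding(address):
                continue                    # target still up: repeat the attack
            if index > 0 and is_responding(targets[index - 1][0]):
                index -= 1                  # previous target recovered: go back one
            else:
                index = (index + 1) % len(targets)   # skip to the next host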


Another Possible Scenario (Cross-Transit Peering)

* Between infection time and a pre-determined trigger time, an infected host 
would attempt to spread the worm to as many other random IP addresses as 
possible.

* When the trigger time is reached, the worm would perform a network trace 
to attempt to reach each of 100 hosts stored in an initial array of targets, 
each with a priority level for later use.

* The worm would then sort the trace results by the number of network hops, 
using the server the greatest number of hops away as the first target (see 
the ordering sketch after this list).

* This target would be attacked with a large stream of data for a 
predetermined period of time.

* When this period of time is over, check to see if the target is responding 
to any valid requests. If so, repeat the attack. If not, and this was the 
first target in the list, skip to the next host on the list. If this wasn't 
the first target on the list, go back one target and see if it is responding 
to requests. If not, skip to the next host on the list.

* If the list of available targets to attack drops below 5 entries, or all 
targets are within 1 transit hop, check the priority level of targets from 
the initial list. Attack the servers with the highest priority code first, 
to ensure the maximum number of infected servers target a single host.
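
The only step that differs from the previous sequence is the initial ordering 
by hop count. A rough Python illustration of that ordering follows; the 
hop_count helper is hypothetical and stands in for a traceroute-style probe 
that the write-up does not specify.

    def hop_count(address):
        # Hypothetical helper: would run a traceroute-style probe and return
        # the number of network hops to the address; not implemented here.
        raise NotImplementedError

    def order_by_distance(targets):
        # targets: list of (address, priority) pairs loaded at trigger time.
        # Returns the targets ordered farthest-first by hop count, so the
        # first target attacked is the one the greatest number of hops away.
        return sorted(targets, key=lambda t: hop_count(t[0]), reverse=True)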

Effects

The general effect of the attacks would be a large quantity of traffic 
streaming through or targeted at the peering points. In the first attack, 
Internet usage is directly affected by the routers becoming so saturated 
with difficult-to-route packets that legitimate traffic is slowed. In the 
second, the effect comes from the infected servers forcing as much traffic 
as possible across as many networks as possible.

As networks block the targets' IP addresses, more bandwidth is directed at 
the remaining hosts from the attack list once each target is checked for a 
valid response. With such a high initial number of targets, the attacking 
bandwidth aimed at each one would be comparatively low (and possibly lost in 
the noise of normal traffic on larger backbones, meaning smaller networks 
might be the first to put filters into place), but as each successive target 
is either moved or has its traffic blocked, the attacking machines would 
simply move down their list, forcing increasing traffic across the major 
peering points. The large number of targets also means that a large number 
of IP addresses or hostnames would be affected, and would either be 
black-holed or be impaired for the duration of the attack.

Using core routers as targets would make it difficult for network 
administrators to black hole the routers' IP addresses, as all of the 
routers' neighbors and peering partners are configured to contact these 
targets for routing table updates. The lack of a predetermined cutoff time 
would allow the attack to continue at some level, regardless of the filters 
in place, until a significant number of affected servers were patched. 
Adding a dormancy period could make many administrators or owners believe, 
incorrectly, that their servers had been patched once the outbound attacks 
stopped.

Potential Targets

One of the most vulnerable points of the internet is the distributed Domain 
Name root server network. Almost all systems on the internet communicate by 
using DNS resolution to locate other machines, and many queries begin at the 
root servers. Microsoft operating systems commonly use the GTLD-SERVERS.NET 
series of root Domain Name Servers, while many Unix and Linux/*BSD systems 
use the ROOT-SERVERS.NET series. Attacking these hosts as secondary targets 
for collateral damage in a distributed denial of service attack would have 
an immediate effect on the functionality and stability of the Internet, 
which could in fact overshadow the effect on the peering points. Also, these 
servers are already under constant attack, and initial attacks by infected 
hosts could get 'lost in the noise', preventing the otherwise potentially 
swift discovery of the worm.

Other Possible Evolutionary Traits

* Raw/Custom Packets: The next worm might include other logic and attack 
methods to increase the difficulty of stopping attacks or locating infected 
hosts. If the worm included the ability to craft custom or raw packets, and 
thus spoof the originating IP, the targets' administrators would have to 
trace the location of each infected host individually through the network, 
or else depend on each site hosting an infected machine to locate those 
hosts on its own network and shut them down.

* Sleep Period: If each worm is designed to attack for a certain period of 
time and then go dormant before attacking at a later date, the infected 
hosts would be difficult to detect once the second 'go to sleep' trigger 
point was reached. Having this trigger point be based on the number of hosts 
available to be attacked would mean that each time targets were brought back 
online after an attack wave, the worm could be re-awakened.

* Loadable Target Lists: Some worms now include code designed to allow them 
to obtain new target lists, causing the infected hosts to initiate attacks 
on new targets after a dormancy period.




["code-red_evolution_writeup.txt" (text/plain)]

A Possible Follow-Up to Code Red
July 31, 2001
Sean McPherson
seanmcp@hotmail.com

Description and Motivation for DoS

When a cracker is attempting to cause a disruption on the Internet, he first 
determines his desired result, then builds or downloads tools to let him 
reach the desired outcome (in effect, taking the primary target offline or 
disrupting their level of service), and deploys the attack. The cracker may 
also possibly choose secondary targets to serve as either collateral damage 
or as a distraction from the primary target.

All scenarios relating to a Denial of Service (DoS) style of attack 
ultimately break down to the location of bottlenecks; these bottlenecks can 
be a single server (either through over-utilization or compromise of the 
hardware, operating system, or piece of software), a single internet 
connection (whether it is a single upstream carrier or all network paths 
routing to the target), or the Internet backbone itself.

Recent History of a Dos Worm

The recent Code Red worm targeted a specific server through the attempted 
over-utilization of available network bandwidth (network flooding). The worm 
was designed to infect as many hosts as possible for a period of time 
leading up to a pre-determined 'trigger' of midnight GMT between July 19th 
and July 20th. When the infected hosts' system time reached the trigger 
time, they were to attempt to connect to the IP address that had been 
allocated for www.whitehouse.gov, and, after connecting successfully, begin 
sending very large streams of data to the server. A CAIDA study shows that a 
minimum of 359,000 servers were known to be infected and active during a 
14-hour period on July 19th/20th.

The attack failed to have the expected impact for three main reasons: the 
worm code was written with the IP for www.whitehouse.gov hard coded (making 
it simple for the whitehouse.gov administrators to move the site to another 
IP and for the major backbone providers to configure their equipment to drop 
traffic destined for that IP), and with a requirement that the attacking 
host make a successful connection to the target IP before sending the large 
quantities of data (since the web site had been moved, no successful 
connections could be made, preventing the streams from ever being 
initiated), and finally, the worm had a preset cutoff time to stop 
attacking.

The code could have been much more devastating if the worm had been written 
with a different target determination logic, with a better-planned attack 
sequence, and no predetermined cutoff time. If the target had been known by 
web address, moving the site to a new IP would not have prevented attacks 
from finding it, and black holing web access to the site to prevent the huge 
flows of illegitimate traffic would have prevented legitimate traffic from 
reaching the site, taking the site offline more effectively than the attack 
itself.


Possible Evolution for the Next DoS Worm

The effectiveness of this worm's infection routine shows how quickly another 
worm might spread, and the question then becomes the effectiveness of the 
payload. Using the numbers from the CAIDA study, assume a future attack on 
IIS servers could infect a minimum of 300,000 servers. The majority of 
servers running IIS are on higher-speed internet connections, as they are 
often located in data centers, in offices with high-speed access, or on home 
connections such as cable or DSL lines. If each of these 300,000 machines 
had an available bandwidth of 256k/s, the total available bandwidth for use 
in an attack by this array of infected machines could reach 75 Gb/sec.

If a cracker's next target isn't a single site, but rather is affecting the 
performance of the internet as a whole, using 'Code Red' as a code-base 
would still speed his production time. Assume another, similar, IIS bug 
appears, but this time the cracker has more specific goals: Target two 
vastly different but critical portions of the internet, and attempt to make 
the attack as self-sustaining as possible. The primary targets would be the 
internet peering points between backbone providers (specifically a series of 
key routers), with secondary targets of the crackers' choosing (possibly the 
root servers of the domain name system, as any attacks on these servers has 
the potential to affect almost all traffic on the internet).

Breakdown of Potential Attack Sequence (Router CPU Over-utilization)

* Between infection time and a pre-determined trigger time, an infected host 
would attempt to spread the worm to as many other random IP addresses as 
possible.

* When the trigger time is reached, the worm would load an initial array of 
targets, each with a priority level for later use.

* This target would be attacked with a large stream of 'invalid' packets, 
specifically designed to cause excess utilization of a router's processor, 
for a predetermined period of time.

* When this period of time is over, check to see if the target is responding 
to any valid requests. If so, repeat the attack. If not, and this was the 
first target in the list, skip to the next host on the list. If this wasn't 
the first target on the list, go back one target and see if it is responding 
to requests. If not, skip to the next host on the list.

* If the list of available targets to attacks drops below 5 entries, check 
the priority level of targets from the initial list. Attack the remaining 
targets with the highest priority code first, to ensure the maximum number 
of infected servers target a single host.


Another Possible Scenario (Cross-Transit Peering)
* Between infection time and a pre-determined trigger time, an infected host 
would attempt to spread the worm to as many other random IP addresses as 
possible.

* When the trigger time is reached, the worm would perform a network trace 
to attempt to each of 100 hosts stored in an initial array of targets, each 
with a priority level for later use.

* The worm would then sort the trace results by the number of network hops, 
using the server the farthest number of hops away as the first target.

* This target would be attacked with a large stream of data for a 
predetermined period of time.

* When this period of time is over, check to see if the target is responding 
to any valid requests. If so, repeat the attack. If not, and this was the 
first target in the list, skip to the next host on the list. If this wasn't 
the first target on the list, go back one target and see if it is responding 
to requests. If not, skip to the next host on the list.

* If the list of available targets to attacks drops below 5 entries, or all 
targets are within 1 transit hop, check the priority level of targets from 
the initial list. Attack the servers with the highest priority code first, 
to ensure the maximum number of infected servers target a single host.

Effects

The general effect of the attacks would be a large quantity of traffic 
streaming through or targeted at the peering points. In the first attack, 
Internet usage is directly affected by the routers becoming so saturated 
with difficult-to-route packets that legitimate traffic is slowed. In the 
second, the effect is caused as the infected servers force as much traffic 
as possible across as many networks as possible. Also, as networks block the 
targets' IP addresses, more bandwidth is directed at the remaining hosts 
from the attack list once the target is checked for a valid response. With 
such a high initial number of targets, the attacking bandwidth for each 
would be comparatively low (and possibly lost in the noise of normal traffic 
on larger backbones, meaning smaller networks might be the first to put 
filters into place) but as each successive target is either moved or the 
traffic is blocked, the attacking machines would simply move down their list 
forcing increasing traffic across the major peering points. Also, the 
addition of a large number of hosts means that a large number of IP 
addresses or hostnames would be affected, and either be black-holed or be 
impaired for the duration of the attack. Using core routers as targets would 
make it difficult for network administrators to black hole the IP addresses 
of the routers, as all of the routers' neighbors and peering partners are 
configured to contact these targets for routing table updates. The lack of a 
predetermined cutoff time would cause the attack to sustain at some level 
regardless of filters in place until such time as a significant number of 
affected servers were patched. Using a dormancy period could make many 
administrators or owners believe, incorrectly, that their servers had been 
patched when the outbound attacks stop.

Potential Targets

One of the most vulnerable points of the internet is the distributed Domain 
Name Root Server network. Almost all systems on the internet communicate by 
using DNS resolution to locate other machines, and many queries initiate at 
the Root Servers. Microsoft OS's commonly use the GTLD-SERVERS.NET series of 
root Domain Name Servers, while many Unix and Linux/*BSD systems use the 
ROOT-SERVERS.NET series. Attacking these hosts as secondary targets for 
collateral damage in a distributed denial of service attack would have an 
immediate effect on the functionality and stability of the Internet, which 
could in fact overshadow that of the effect on the peering points. Also, 
these servers are under constant attack, and initial attacks by infected 
hosts could get 'lost in the noise', preventing the otherwise potentially 
swift discovery of the worm.

Other Possible Evolutionary Traits

* Raw/Custom Packets The next worm might include other logic and attack 
methods to increase the difficulty of stopping attacks or locating infected 
hosts. If the worm included the ability to craft custom or raw packets, and 
thus spoof the originating IP, the targets' administrators would have to 
trace the location of each infected host individually though the network, or 
else depend on each site of an infected host to locate the hosts on their 
network and shut them down.
* Sleep Period If each worm is designed to attack for a certain period of 
time and then go dormant before attacking at a later date, the infected 
hosts would be difficult to detect once the second 'go to sleep' trigger 
point was reached. Having this trigger point be based on the number of hosts 
available to be attacked would mean that each time targets were brought 
online after a storm, the worm could be re-awakened.
* Loadable Target Lists Some worms now include code designed to allow them 
to obtain new target lists, causing the infected hosts to initiate attacks 
on new targets after a dormancy period.



