20-08-2014, 11:53 AM
SEMINAR REPORT ON SLAMMER WORM: THE FASTEST SPREADING BOMBSHELL ON THE INTERNET
What is a computer virus?
A virus is a computer program that moves from one computer to another, either with the user's unwitting help or by attaching itself to some other program. These programs are typically malicious rather than beneficial; even when they carry no payload, they drain system resources. Several classes of code fall under the category “virus”. Not all of them are viruses in the strict technical sense; some are worms and Trojan horses.
What is a computer worm?
Worms are self-replicating programs that do not infect other programs as viruses do; instead they create copies of themselves, which in turn create further copies, hogging memory and clogging the network. Worms are usually seen on networks and multiprocessing operating systems.
Slammer Worm: A glance onto the facts
Slammer (sometimes called Sapphire) was the fastest computer worm in history. As it began spreading throughout the Internet, it infected more than 90 percent of vulnerable hosts within 10 minutes, causing significant disruption to financial, transportation, and government institutions and precluding any human-based response. In this seminar, I describe how it achieved its rapid growth, dissect portions of the worm to study some of its flaws, and assess the effectiveness of defenses against it and its successors.
Slammer began to infect hosts on Saturday, 25 January 2003, by exploiting a buffer-overflow vulnerability in computers on the Internet running Microsoft SQL Server or Microsoft SQL Server Desktop Engine (MSDE) 2000. David Litchfield of Next Generation Security Software discovered this underlying indexing service weakness in July 2002; Microsoft released a patch before the vulnerability was publicly disclosed. Exploiting this vulnerability, the worm infected at least 75,000 hosts, perhaps considerably more, and caused network outages and unforeseen consequences such as canceled airline flights, interference with elections, and ATM failures.
How Slammer chooses its victims
The worm's spreading strategy uses random scanning: it randomly selects IP addresses, eventually finding and infecting all susceptible hosts. Random-scanning worms initially spread exponentially, but their rapid new-host infection slows as the worms continually retry infected or immune addresses. Thus, as with the Code Red worm shown in Figure 2, Slammer's infected-host proportion follows a classic logistic form of initial exponential growth in a finite system. We label this growth behavior the random constant spread (RCS) model.
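The RCS model can be illustrated with a short numerical sketch. If a is the infected fraction of vulnerable hosts and K is the initial compromise rate, random scanning gives the logistic equation da/dt = K·a·(1−a). The parameter values below are illustrative, not measured from Slammer:

```python
# Numerical sketch of the random constant spread (RCS) model:
# da/dt = K * a * (1 - a). Values of K, dt, and a are illustrative.
K = 6.0        # compromises per unit time per infected host (assumed)
dt = 0.001     # Euler integration step
a = 1e-6       # tiny initial infected fraction ("patient 0")
trajectory = [a]
for _ in range(int(10 / dt)):
    a += K * a * (1 - a) * dt   # Euler step of the logistic equation
    trajectory.append(a)
print(round(trajectory[-1], 4))  # → 1.0: essentially all vulnerable hosts infected
```

The trajectory shows the two regimes described above: near-exponential growth while a is small, then saturation as most probes hit already-infected or immune addresses.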
Why was Slammer so fast?
While Slammer spread nearly two orders of magnitude faster than Code Red, it probably infected fewer machines. Both worms use the same basic scanning strategy to find vulnerable machines and transfer their exploitive payloads; however, they differ in their scanning constraints. While Code Red is latency-limited, Slammer is bandwidth-limited, enabling Slammer to scan as fast as a compromised computer can transmit packets or a network can deliver them. Slammer's 376 bytes comprise a simple, fast scanner. With its requisite headers, the payload becomes a single 404-byte user datagram protocol (UDP) packet. Contrast Slammer's 404 bytes with Code Red's 4 Kbytes or Nimda's 60 Kbytes.
Previous scanning worms, such as Code Red, spread via many threads, each invoking connect() to open a TCP session to random addresses. Consequently, each thread's scanning rate was limited by network latency. After sending a TCP SYN packet to initiate the connection, each thread must wait to receive a corresponding SYN/ACK packet from the target host, or time out if no response is received. During this time, the thread is blocked and cannot infect other hosts. In principle, worms can compensate for this latency by invoking a sufficiently large number of threads. In practice, however, operating system limitations, such as context-switch overhead and kernel stack memory consumption, limit the number of active threads a worm can use effectively. So a worm like Code Red quickly stalls and becomes latency-limited, as every thread spends most of its time waiting for responses.
In contrast, Slammer's scanner is limited by each compromised machine's Internet bandwidth. Because a single packet to UDP port 1434 could exploit the SQL server's vulnerability, the worm was able to broadcast scans without requiring responses from potential victims. Slammer's inner loop is very small, and with modern servers' I/O capacity to transmit network data at more than 100 Mbits per second, Slammer frequently was limited by Internet access bandwidth rather than its ability to replicate copies of itself.
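The gap between the two regimes can be sketched with back-of-the-envelope arithmetic. The thread count and round-trip time below are assumed figures for illustration; the 404-byte packet size and 100 Mbit/s link speed come from the discussion above:

```python
# Latency-limited (Code Red style): each thread blocks for roughly one
# round-trip time per probe, so the scan rate is bounded by threads / RTT.
threads = 100          # assumed thread count
rtt = 0.2              # assumed average round-trip time, in seconds
latency_limited_rate = threads / rtt
print(latency_limited_rate)          # → 500.0 probes per second

# Bandwidth-limited (Slammer): one 404-byte UDP packet per probe, sent
# without waiting for any reply, so link capacity is the only bound.
link_bps = 100e6       # 100 Mbit/s of I/O capacity
packet_bits = 404 * 8  # Slammer's complete UDP packet is 404 bytes
bandwidth_limited_rate = link_bps / packet_bits
print(int(bandwidth_limited_rate))   # → 30940, roughly 30,000 probes per second
```

Under these assumptions the fire-and-forget UDP scanner is nearly two orders of magnitude faster per host, which matches the spreading-rate difference between Slammer and Code Red noted above.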
How did the Internet respond?
Researchers inferred the worm's overall scanning behavior over time by passively monitoring traffic (sniffing or sampling packets, or examining firewall logs) on a set of links providing connectivity to multiple networks, each responsible for about 65,000 IP addresses.
The most accurate early-progress data on Slammer was obtained from the University of Wisconsin Advanced Internet Lab (WAIL), which logs all packet traffic into an otherwise unused network, a "tarpit" (see Figure 4). Because this data set represents a complete trace of all packets to an address space of known size, it lets us accurately extrapolate the worm's global spread. Unfortunately, a transient failure in data collection temporarily interrupted this data set approximately 2 minutes and 40 seconds after Slammer began to spread. Other sampled data sets are not sufficiently precise for accurate evaluation over short durations.
Why did Slammer cause problems?
Although Slammer did not contain an explicitly malicious payload, there were widely reported incidents of disruption, including failures of Bellevue, Washington's 911 emergency data-entry terminals and portions of Bank of America's ATM network. Inadvertent internal denial-of-service (DoS) attacks caused the large majority of these disruptions, as one or more infected machines sent out packets at their maximum possible rates. This traffic either saturated the first shared bottleneck or crashed some network equipment. The bottleneck effects are obvious, since a site's outgoing bandwidth is usually significantly less than a single Slammer instance can consume. Thus, the worm's packets saturated Internet links, effectively denying connectivity to all computers at many infected sites.
Equipment failures tended to be a consequence of Slammer's traffic patterns generated by infected machines, although any given equipment's failure details varied. Slammer's scanner produced a heavy load in three ways: a large traffic volume, lots of packets, and a large number of new destinations (including multicast addresses). We feel this combination probably caused most network-equipment failures by exhausting CPU or memory resources.
If attackers can control a few machines on a target network, they can mount a DoS attack on the entire local network using a program that mimics Slammer's behavior. Because these are "normal" UDP packets, no special privileges (such as root or system administrator rights) are required; the attackers need only the ability to execute their program. Critical networks should therefore employ traffic shaping, fair queuing, or similar techniques to prevent a few machines from monopolizing network resources.
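Traffic shaping of the kind recommended here is commonly built on a token bucket: a host may transmit only when enough "tokens" have accumulated, which caps its sustained rate while permitting short bursts. The following is a minimal sketch, not any particular router's implementation; the rate and burst values are illustrative:

```python
# Minimal token-bucket shaper sketch. A packet may be sent only if enough
# tokens (measured in bits) have accumulated, capping the long-run rate.
class TokenBucket:
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # sustained rate allowed, bits/second
        self.capacity = burst_bits  # maximum burst size, bits
        self.tokens = burst_bits    # bucket starts full
        self.last = 0.0             # time of the previous check

    def allow(self, now, packet_bits):
        # Refill tokens for the elapsed time, up to the burst capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True   # packet passes the shaper
        return False      # packet is dropped or queued

# A Slammer-like sender tries to emit a 404-byte packet every 0.1 ms
# (10,000 attempts over one second), but a 1 Mbit/s bucket with a
# 1500-byte burst lets only a small fraction through.
bucket = TokenBucket(rate_bps=1e6, burst_bits=8 * 1500)
sent = sum(bucket.allow(i * 1e-4, 404 * 8) for i in range(10_000))
print(sent)  # a few hundred packets pass instead of 10,000
```

This is why shaping contains the damage: one runaway host can no longer saturate the shared uplink, even though each individual packet looks entirely ordinary.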
Who wrote Slammer?
After much study, very few clues have been found about Slammer's author's identity, location, or motive. No one has claimed authorship, and the code contains no identifying signatures or comments. There are also no variable names in the code, making it impossible to determine the author's native tongue. The author shows decent, but not remarkable, x86 coding skill: much of the code was borrowed from a published exploit, the additional code is not very complicated, and the author made three minor mistakes in the random-number generator. Finally, no one has discovered "patient 0," the initial point of infection, so it is impossible to determine where the worm was released or to trace the release back to the author.
Conclusion
The above points suggest that however far technology advances, and however much we try to secure, the human mind can always probe beyond the obvious. Firewalls and other security measures can be bypassed; the safest site can be hacked; even the most intricate encryption can be broken. In such a world, making anything truly invulnerable is not viable; what is needed is anticipation. Many of the most frequently attacked sites are government-owned, yet the services they provide remain indispensable. Security or no security, the Internet will endure, and it is each user's responsibility to harden their own system with additional measures.
It is also time to learn from past mistakes: most malicious programmers reuse code that has appeared before, so building defenses against known attack code will undoubtedly help.
Lastly, it is impossible to live safely in this harsh environment without awareness. Knowing that threats can occur, and have occurred, certainly helps. Only curiosity brings knowledge, and only preparation in advance makes threats less dangerous. The quest for wisdom will never end, and neither will the monsters it produces; constant vigilance is needed to ward them off.