08-02-2013, 04:55 PM
Modeling and Automated Containment of Worms
Abstract
Self-propagating codes, called worms, such as Code Red, Nimda, and Slammer, have drawn significant attention due to their enormously adverse impact on the Internet. Thus, there is great interest in the research community in modeling the spread of worms and in providing adequate defense mechanisms against them.
In this paper, we present a (stochastic) branching process model for characterizing the propagation of Internet worms.
• The model is developed for uniform scanning worms.
• The model is then extended to local preference scanning worms.
• The model leads to the development of an automatic worm containment strategy that prevents the spread of a worm beyond its early stage.
• Specifically, for uniform scanning worms, we are able to determine whether the worm spread will eventually stop.
• We then extend our results to contain uniform scanning worms.
• Our automatic worm containment schemes effectively contain both uniform scanning worms and local preference scanning worms.
• The schemes are validated through simulations and real trace data to be non-intrusive.
Introduction
The Internet has become critically important to the financial viability of the national and the global economy. Meanwhile, we are witnessing an upsurge in incidents of malicious code in the form of computer viruses and worms. One class of such malicious code, known as random scanning worms, spreads itself without human intervention by using a scanning strategy to find vulnerable hosts to infect. Code Red, SQL Slammer, and Sasser are some of the more famous examples of worms that have caused considerable damage. Network worms have the potential to infect many vulnerable hosts on the Internet before human countermeasures take place. The aggressive scanning traffic generated by the infected hosts has caused network congestion, equipment failure, and blocking of physical facilities such as subway stations, 911 call centers, etc. As a representative example, consider the Code Red worm Version 2, which exploited a buffer overflow vulnerability in Microsoft IIS Web servers. It was released on 19 July 2001, and over a period of less than 14 hours infected more than 359,000 machines. The cost of the epidemic, including subsequent strains of Code Red, has been estimated by Computer Economics to be $2.6 billion.
• The goal of our research is to provide a model for the propagation of random scanning worms and the development of automatic containment mechanisms that prevent the spread of worms.
• This containment scheme is then extended to protect an enterprise network from a preference scanning worm.
• A host infected with a random scanning worm finds and infects other vulnerable hosts by scanning randomly generated IP addresses.
• Worms using other strategies to find vulnerable hosts to infect are not within the scope of this work.
• Some examples of nonrandom-scanning worms are e-mail worms, peer-to-peer worms, and worms that search the local host for addresses to scan.
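The uniform random scanning behavior described above can be sketched in a few lines. This is an illustrative model, not code from the paper; the address-space constant and helper names are assumptions made here for clarity.

```python
import random

# Hedged sketch: a uniform scanning worm picks 32-bit IPv4 addresses
# uniformly at random and probes each one; a scan "hits" when the
# address belongs to a vulnerable host.
ADDRESS_SPACE = 2 ** 32

def random_scan_targets(num_scans, rng):
    """Generate num_scans uniformly random IPv4 addresses as integers."""
    return [rng.randrange(ADDRESS_SPACE) for _ in range(num_scans)]

def count_hits(targets, vulnerable):
    """Count how many scanned addresses belong to the vulnerable set."""
    return sum(1 for t in targets if t in vulnerable)

# With V vulnerable hosts, each scan hits with probability V / 2^32,
# so hits are rare even for large scan counts.
rng = random.Random(42)
vulnerable = set(rng.randrange(ADDRESS_SPACE) for _ in range(360_000))
targets = random_scan_targets(100_000, rng)
print(count_hits(targets, vulnerable))
```

The key observation for the modeling that follows is that each scan succeeds independently with a tiny probability proportional to the number of vulnerable hosts.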
Most models of Internet-scale worm propagation are based on deterministic epidemic models, which:
• apply only when the number of infected hosts is large,
• capture only the expected (mean) behavior, and
• fail to capture the variability around that mean, which is dramatic during the early phase of worm propagation.
While stochastic epidemic models can be used to model this early phase, they are generally too complex to provide useful analytical solutions.
In this paper, we propose a stochastic branching process model for the early phase of worm propagation:
1. We consider the generation-wise evolution of worms, with the hosts that are infected at the beginning of the propagation forming generation zero.
2. The hosts that are directly infected by hosts in generation n are said to belong to generation n + 1.
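The generation-wise evolution above can be simulated as a Galton-Watson branching process. The parameters below (scan budget, infection probability, initial hosts) are illustrative assumptions for demonstration, not values from the paper.

```python
import random

# Hedged sketch: each infected host in generation n is given a scan
# budget; each scan independently infects a new host with probability p
# (roughly the fraction of the address space that is vulnerable).
def simulate_generations(initial_infected, scans_per_host, p, max_gens, rng):
    """Return the number of newly infected hosts in each generation."""
    sizes = [initial_infected]
    current = initial_infected
    for _ in range(max_gens):
        offspring = 0
        for _host in range(current):
            # Binomial(scans_per_host, p) new infections by this host.
            offspring += sum(rng.random() < p for _ in range(scans_per_host))
        sizes.append(offspring)
        current = offspring
        if current == 0:
            break  # the worm has died out
    return sizes

# Illustrative run: mean offspring per host is 1500 * 0.001 = 1.5,
# so the process is supercritical and tends to grow.
rng = random.Random(1)
print(simulate_generations(10, 1500, 0.001, 6, rng))
```

Repeated runs with different seeds show the large run-to-run variability in the early generations that deterministic models average away.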
• Our model captures the worm spreading dynamics for worms of arbitrary scanning rate, including stealth worms that may turn themselves off at times.
• We show that it is the total number of scans that an infected host attempts, and not the more restrictive scanning rate, which determines whether worms can spread. Moreover, we can probabilistically bound the total number of infected hosts.
These insights lead us to develop an automatic worm containment strategy. The main idea is to limit the total number of distinct IP addresses contacted (denote the limit as MC) per host over a period we call the containment cycle, which is of the order of weeks or months. We show that the value of MC does not need to be as carefully tuned as in traditional rate control mechanisms. Further, we show that this scheme has only a marginal impact on the normal operation of the networks. Our scheme is fundamentally different from rate limiting schemes because we are not bounding instantaneous scanning rates.
Preference scanning worms are a common class of worms but have received significantly less attention from the research community. Unlike uniform scanning worms, this type of worm prefers to scan random IP addresses in the local network over the overall Internet. We show that a direct application of the containment strategy for uniform scanning worms to the case of preference scanning worms makes the system too restrictive in terms of the number of allowable scans from a host. We therefore propose a local worm containment system based on restricting a host's total number of scans to local unused IP addresses (denoted as N). We then use a stochastic branching process model to derive a bound on the value of N that ensures the worm spread is stopped.
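The containment idea described above, capping the number of distinct destinations per host per containment cycle rather than the instantaneous rate, can be sketched as follows. The class and its interface are illustrative assumptions made here, not the paper's implementation.

```python
# Hedged sketch: allow each host at most MC *distinct* destination
# addresses per containment cycle. Repeat contacts to already-seen
# destinations are never blocked, which is why legitimate traffic
# (dominated by repeat contacts) is largely unaffected.
class DistinctContactLimiter:
    def __init__(self, mc):
        self.mc = mc    # max distinct destinations per host per cycle
        self.seen = {}  # host -> set of destinations contacted this cycle

    def allow(self, host, dest):
        """Return True if this contact is permitted, False if blocked."""
        contacted = self.seen.setdefault(host, set())
        if dest in contacted:
            return True   # repeat contact: always allowed
        if len(contacted) >= self.mc:
            return False  # new destination beyond the MC budget
        contacted.add(dest)
        return True

    def new_cycle(self):
        """Reset all counters at the start of a containment cycle."""
        self.seen.clear()

limiter = DistinctContactLimiter(mc=3)
print([limiter.allow("h1", d) for d in ["a", "b", "a", "c", "d"]])
# -> [True, True, True, True, False]
```

Note that a scanning worm, which contacts mostly fresh addresses, exhausts the MC budget quickly, while a host repeatedly talking to the same few servers never does.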
The main contributions of the paper are summarized as follows:
• We provide a means to accurately model the early phase of propagation of uniform scanning worms.
• We provide an equation that lets a system designer probabilistically bound the total number of infected hosts in a worm epidemic. The parameter that controls the spread is the number of allowable scans for any host.
• The insight from our model provides a mechanism for containing both fast-scanning worms and slow-scanning worms without knowing the worm signature in advance or needing to detect whether a host is infected. This scheme is non-intrusive in terms of its impact on legitimate traffic.
• Our model and containment scheme are validated through analysis, simulation, and real traffic statistics.
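A back-of-the-envelope version of how the scan budget controls the spread follows from standard branching-process theory (a Galton-Watson process dies out when the mean offspring count is at most 1); this is a hedged illustration, not the paper's actual equation.

```python
# Hedged estimate: with V vulnerable hosts in the 2^32 IPv4 space, a
# host allowed MC scans infects on average MC * V / 2^32 other hosts.
# Keeping that mean at or below 1 makes the branching process
# subcritical, so the spread eventually stops.
ADDRESS_SPACE = 2 ** 32

def mean_offspring(mc, vulnerable_hosts):
    """Expected number of new infections caused by one infected host."""
    return mc * vulnerable_hosts / ADDRESS_SPACE

def max_safe_mc(vulnerable_hosts):
    """Largest scan budget keeping the process subcritical (mean <= 1)."""
    return ADDRESS_SPACE // vulnerable_hosts

# Example with Code Red-scale numbers (~360,000 vulnerable hosts).
print(max_safe_mc(360_000))             # 11930 distinct contacts
print(mean_offspring(11_930, 360_000))  # just under 1
```

Because a typical host contacts far fewer than ~12,000 distinct addresses per cycle, a limit of this magnitude need not be finely tuned, consistent with the claim above that MC tolerates loose tuning.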
The rest of the paper is organized as follows: In Section 2, we review relevant research on network worms. In Section 3, we present our branching process model with corresponding analytical results on the spread of the infection. In Sections 4 and 5, we describe an automatic worm containment scheme for random scanning worms and its adaptation to the case of local preference scanning worms. In Section 6, we provide numerical results that validate our model and confirm the effectiveness of our containment scheme.
Literature Review
A computer worm is a self-replicating computer program. It uses a network to send copies of itself to other nodes (computer terminals on the network) and it may do so without any user intervention. Unlike a virus, it does not need to attach itself to an existing program. Worms almost always cause harm to the network, if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.