26-12-2012, 02:16 PM
can we implement esm efficient and scalable data center multicast routing with network simulator2(NS2)?
27-12-2012, 12:38 PM
To get information about a topic related to "ESM: Efficient and Scalable Data Center Multicast Routing", refer to the link below:
https://seminarproject.net/Thread-secure...h-networks
10-05-2013, 03:01 PM
ESM: Efficient and Scalable Data Center Multicast Routing
ESM Efficient and Scalable.PDF (Size: 1.42 MB / Downloads: 37)

Abstract

Multicast benefits group communications by saving network traffic and improving application throughput, both of which are important for data center applications. However, the technical trend of data center design poses new challenges for efficient and scalable multicast routing. First, the densely connected networks make traditional receiver-driven multicast routing protocols inefficient in multicast tree formation. Second, it is quite difficult for the low-end switches widely used in data centers to hold the routing entries of massive multicast groups. In this paper, we propose ESM, an efficient and scalable multicast routing scheme for data center networks. ESM addresses these challenges by exploiting the features of modern data center networks. Based on the regular topology of data centers, ESM uses a source-to-receiver expansion approach to build efficient multicast trees, excluding many unnecessary intermediate switches used in receiver-driven multicast routing.

INTRODUCTION

Group communication widely exists in data centers hosting cloud computing [5], [20]. Multicast benefits group communications by both saving network traffic and improving application throughput. Though network-level multicast in the Internet has borne a notorious reputation over the past two decades for its deployment obstacles and for many open issues such as congestion control, pricing models, and security concerns, there has recently been a noticeable resurgence of it, e.g., the successful application of IP multicast in IPTV networks [6], enterprise networks, etc. The managed environment of data centers also provides a good opportunity for multicast deployment. However, existing multicast protocols built into data center switches/servers are primarily based on the multicast design for the Internet.
Before the wide application of multicast in data centers, we need to carefully investigate whether these Internet-oriented multicast technologies can accommodate data center networks well.

BACKGROUND AND DESIGN RATIONALE

In this section, we discuss the background of data center multicast and our design rationale.

Data Center Multicast

One-to-many group communication is common in modern data centers running cloud-based applications. Multicast is the natural technology to benefit this kind of communication pattern, for the purposes of both saving network bandwidth and reducing the load on the sender. For Web search services, an incoming user query is directed to a set of indexing servers to look up the matching documents [4]. Multicast can help accelerate the directing process and reduce the response time. Distributed file systems are widely used in data centers, such as GFS [2] at Google, HDFS in Hadoop, and COSMOS at Microsoft. Files are divided into many fixed-size chunks, say, 64 or 100 MB. Each chunk is replicated into several copies stored on servers located in different racks to improve reliability. Chunk replication is usually bandwidth-hungry, and multicast-based replication can save inter-rack bandwidth. In MapReduce-like cooperative computations [3], the executable binary is delivered to the servers participating in the computation task before execution. Multicast can also speed up the binary delivery and reduce the task finish time.

Data Center Network Architecture

In current practice, data center servers are connected by a tree hierarchy of Ethernet switches, with commodity switches at the first level and increasingly larger and more expensive ones at higher levels. It is well known that this kind of tree structure suffers from many problems. The top-level switches are the bandwidth bottleneck, so high-end high-speed switches have to be used. Moreover, a high-level switch is a single point of failure for its subtree branch.
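As a rough illustration of the bandwidth argument above, the inter-rack traffic saved by multicast-based chunk replication can be estimated with simple arithmetic. The sketch below uses a simplified model and hypothetical numbers (3 replicas of a 64 MB chunk), which are our assumptions, not figures from the paper:

```python
# Hypothetical illustration: inter-rack traffic for replicating one chunk
# with unicast versus multicast. The model and numbers are assumptions.

CHUNK_MB = 64      # fixed-size chunk, as in GFS/HDFS
REPLICAS = 3       # copies stored in different racks

def unicast_traffic_mb(chunk_mb, replicas):
    # With unicast, the source sends one full copy per remote replica,
    # so its inter-rack uplink carries `replicas` copies of the chunk.
    return chunk_mb * replicas

def multicast_traffic_mb(chunk_mb, replicas):
    # With a multicast tree, shared links carry a single copy; this
    # simple model counts only the source's uplink traffic.
    return chunk_mb

saved = unicast_traffic_mb(CHUNK_MB, REPLICAS) - multicast_traffic_mb(CHUNK_MB, REPLICAS)
print(saved)  # 128 MB saved on the source's inter-rack uplink
```

With more replicas the savings grow linearly, which is why replication-heavy workloads are a natural fit for multicast.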
Using redundant switches does not fundamentally solve the problem, but incurs even higher cost. Recently, there has been growing interest in the community in designing new data center network architectures with high bisection bandwidth to replace the tree structure [7]–[11]. BCube and Fat-Tree are the representative ones among these architectures.

Design Rationale

Modern data center network architectures pose new challenges in the design of efficient and scalable multicast routing. First, the redundant links in data center networks make traditional receiver-driven multicast routing protocols inefficient in terms of the number of links in the formed multicast tree. Second, the routing table memory in low-end commodity switches is quite limited, making it challenging to support a large number of multicast groups. In densely connected data center networks, there are often a large number of tree candidates for a group. Given multiple equal-cost paths between servers/switches, it is undesirable to run traditional receiver-driven multicast routing protocols such as PIM for tree building, because independent path selection by receivers can result in many unnecessary intermediate links.

Problem

To eliminate the unnecessary switches used in independent receiver-driven multicast routing, ESM builds the multicast tree in a managed way. The group join/leave requests from receivers are directed to the multicast manager. The multicast manager then calculates the multicast tree based on the data center topology and the group membership distribution. Data center topologies are regular graphs, and the multicast manager can easily maintain the topology information (with failure management). The problem then translates into how to calculate an efficient multicast tree on the multicast manager. Note that building the Steiner tree in general graphs is NP-Hard.
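The source-to-receiver expansion idea can be sketched as a stage-by-stage greedy walk down a multistage topology: starting from the source, at each stage the manager keeps only downstream switches that actually cover remaining receivers. The per-stage greedy cover below is our own simplification for illustration, not the paper's exact algorithm; the topology encoding (`children`, `reach`) is likewise an assumption:

```python
# Hypothetical sketch: source-to-receiver multicast tree expansion on a
# multistage data center topology. We assume the manager knows, for each
# node, its downstream neighbors (`children`) and the set of receivers
# reachable through each node (`reach`).

def expand_tree(source, children, reach, receivers):
    """Build tree links from the source toward the receivers, stage by stage."""
    tree = []                               # chosen (parent, child) links
    frontier = {source: set(receivers)}     # node -> receivers it must serve
    while frontier:
        next_frontier = {}
        for node, need in frontier.items():
            if not children.get(node):      # reached a receiver / leaf node
                continue
            remaining = set(need)
            # Greedy cover: repeatedly pick the child reaching the most
            # still-uncovered receivers, skipping unnecessary branches.
            while remaining:
                best = max(children[node],
                           key=lambda c: len(reach[c] & remaining))
                covered = reach[best] & remaining
                if not covered:
                    break                   # remaining receivers unreachable here
                tree.append((node, best))
                next_frontier.setdefault(best, set()).update(covered)
                remaining -= covered
        frontier = next_frontier
    return tree
```

Because uncovering branches are never added, switches that a receiver-driven protocol might pull in through independent path selection simply never enter the tree.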
For recently proposed data center topologies, multicast tree building in Fat-Tree is polynomial (the optimal multicast tree is determined by randomly selecting a core-level switch). However, we prove that the Steiner tree problem in BCube is also NP-Hard.

Combination Routing Scheme

It is challenging to support massive multicast groups using the traditional approach of installing all the multicast routing entries into switches, especially on the low-end commodity switches commonly used in data centers. In-packet Bloom Filters are an alternative choice, since they eliminate the need for in-switch entries. However, in-packet Bloom Filters can cause bandwidth waste during multicast forwarding. We then evaluate the combination routing scheme by installing different numbers of multicast routing entries in the switches. To demonstrate the performance of combination routing, the number of routing entries installed in each switch is less than the total number of groups in the network. The number of larger groups using in-switch routing equals the maximum number of in-switch entries, while the other groups use in-packet Bloom-Filter-based routing.

Implementation

Our implementation of ESM includes three parts, namely the multicast manager, the multicast library on servers, and the combination forwarding engine. The multicast manager is responsible for calculating the multicast tree, configuring the multicast routing rules on switches, and sending the tree information to the source server if in-packet Bloom-Filter-based multicast routing is used. The multicast library on servers redirects multicast group join/leave requests to the multicast manager, inserts the Bloom Filter field into an outgoing packet header if required, and removes the Bloom Filter field from an incoming packet if it exists.
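The combination forwarding decision described above can be sketched as follows: a switch first consults its in-switch multicast table (used for the larger groups); if the group has no entry, it tests each outgoing link against the in-packet Bloom Filter that encodes the tree's links. The hash construction and field layout below are our assumptions for illustration, not ESM's actual packet format:

```python
# Hypothetical sketch of the combination forwarding engine. A Bloom
# Filter in the packet header encodes the link IDs of the multicast
# tree; hash choices and encoding are assumptions, not the paper's.

import hashlib

def bf_hashes(item, m, k=3):
    """k hash positions in an m-bit Bloom Filter for `item`."""
    return [int(hashlib.sha256(f"{item}:{i}".encode()).hexdigest(), 16) % m
            for i in range(k)]

def bf_add(bits, item):
    """Set the k bits for `item` (e.g., a link ID) in the filter."""
    for pos in bf_hashes(item, len(bits)):
        bits[pos] = 1

def bf_contains(bits, item):
    """Membership test; may return false positives, never false negatives."""
    return all(bits[pos] for pos in bf_hashes(item, len(bits)))

def forward_ports(switch_table, group, packet_bf, out_links):
    """Return the links a packet should be forwarded on at this switch."""
    if group in switch_table:           # larger group: in-switch entry
        return switch_table[group]
    # Smaller group: in-packet Bloom Filter encodes the tree's links.
    # A false positive forwards on an extra link (bandwidth waste),
    # which is the tradeoff the combination scheme balances.
    return [l for l in out_links if bf_contains(packet_bf, l)]
```

Capping the number of in-switch entries and pushing the remaining groups to in-packet filters is what bounds both the switch memory usage and the false-positive bandwidth waste.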
Scalable Multicast Routing

As one of the deployment obstacles of multicast in the Internet, scalable multicast routing has attracted much attention in the research community. For data center networks, where low-end switches with limited hardware routing space are used, the problem is even more challenging. Bloom Filters can be used to compress in-switch multicast routing entries. In FRM [18], Bloom-Filter-based group information is maintained at border routers to help determine interdomain multicast packet forwarding. A similar idea is adopted in BUFFALO [19], though it is primarily designed for scalable unicast routing. In ESM, we use in-packet Bloom Filters to further reduce the multicast routing entries on switches.

CONCLUSION

In this paper, we design ESM, an efficient and scalable multicast routing scheme for modern data center networks. Given that receiver-driven multicast routing does not perform well in densely connected data center networks, ESM efficiently builds the multicast tree by leveraging the multistage graph feature of data center networks. ESM combines both in-packet Bloom Filters and in-switch entries to trade off the number of multicast groups supported against the additional bandwidth overhead, so as to achieve scalable routing.