18-06-2014, 10:57 AM
Mobile ad-hoc network (MANET)
Introduction
1.1 Introduction
A mobile ad hoc network (MANET) is a collection of mobile nodes that form a decentralized, self-organizing and self-configuring wireless network without any fixed infrastructure. MANETs are characterized by node mobility and communication over multiple hops; hence, MANET links are short-lived and the topology is highly dynamic. Each mobile node behaves not only as a host but also as a router, capable of communicating with other nodes over either direct wireless links or multi-hop wireless links. Examples of ad hoc network applications include business associates sharing information during meetings or conferences and soldiers relaying information on a battlefield.
3 Congestion Management
Because of the bursty nature of voice/video/data traffic, sometimes the amount of traffic exceeds the speed of a link. At this point, what will the router do? Will it buffer traffic in a single queue and let the first packet in be the first packet out? Or, will it put packets into different queues and service certain queues more often? Congestion-management tools address these questions. Tools include priority queuing (PQ), custom queuing (CQ), weighted fair queuing (WFQ)[7], and class-based weighted fair queuing (CBWFQ).
FIFO: Basic Store-and-Forward Capability
In its simplest form, FIFO queuing involves storing packets when the network is congested and forwarding them in order of arrival when the network is no longer congested. FIFO is the default queuing algorithm in some instances, thus requiring no configuration, but it has several shortcomings. Most importantly, FIFO queuing makes no decision about packet priority; the order of arrival determines bandwidth, promptness, and buffer allocation.
PQ: Prioritizing Traffic
PQ ensures that important traffic gets the fastest handling at each point where it is used. It was designed to give strict priority to important traffic. Priority queuing can flexibly prioritize according to network protocol (for example IP, IPX, or AppleTalk), incoming interface, packet size, source/destination address, and so on. In PQ, each packet is placed in one of four queues (high, medium, normal, or low) based on an assigned priority. Packets that are not classified by this priority list mechanism fall into the normal queue. During transmission, the algorithm gives higher-priority queues absolute preferential treatment over low-priority queues. PQ is useful for making sure that mission-critical traffic traversing various WAN links gets priority treatment.
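The strict-preference behaviour described above can be sketched as follows. This is a minimal illustration, not Cisco's implementation: the four queue names follow the text, while the class name, classifier, and packet labels are hypothetical.

```python
from collections import deque

# Sketch of strict priority queuing (PQ): four queues (high, medium,
# normal, low); the scheduler always serves the highest-priority
# non-empty queue first.
PRIORITIES = ("high", "medium", "normal", "low")

class PriorityQueuer:
    def __init__(self):
        self.queues = {p: deque() for p in PRIORITIES}

    def enqueue(self, packet, priority="normal"):
        # Packets not matched by the classifier fall into "normal".
        if priority not in self.queues:
            priority = "normal"
        self.queues[priority].append(packet)

    def dequeue(self):
        # Strict preference: a lower queue is served only when every
        # higher-priority queue is empty.
        for p in PRIORITIES:
            if self.queues[p]:
                return self.queues[p].popleft()
        return None

pq = PriorityQueuer()
pq.enqueue("telnet-pkt", "high")
pq.enqueue("ftp-pkt", "low")
pq.enqueue("web-pkt")          # unclassified -> normal queue
order = [pq.dequeue() for _ in range(3)]
print(order)  # ['telnet-pkt', 'web-pkt', 'ftp-pkt']
```

Note the downside implicit in strict preference: if the high queue never empties, lower queues are starved indefinitely.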
CQ: Guaranteeing Bandwidth
CQ was designed to allow various applications or organizations to share the network among applications with specific minimum bandwidth or latency requirements. In these environments, bandwidth must be shared proportionally between applications and users. You can use the Cisco CQ feature to provide guaranteed bandwidth at a potential congestion point, ensuring the specified traffic a fixed portion of available bandwidth and leaving the remaining bandwidth to other traffic. Custom queuing handles traffic by assigning a specified amount of queue space to each class of packets and then servicing the queues in a round-robin fashion.
Flow-Based WFQ: Creating Fairness among Flows
For situations in which it is desirable to provide consistent response time to heavy and light network users alike without adding excessive bandwidth, the solution is flow-based WFQ (commonly referred to as just WFQ). WFQ is one of Cisco's premier queuing techniques. It is a flow-based queuing algorithm that creates bit-wise fairness by allowing each queue to be serviced fairly in terms of byte count. For example, if queue 1 has 100-byte packets and queue 2 has 50-byte packets, the WFQ algorithm will take two packets from queue 2 for every one packet from queue 1. This makes service fair for each queue: 100 bytes each time the queue is serviced.
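The byte-count fairness in the example above can be sketched with a deficit round-robin approximation (an assumption for illustration, not Cisco's actual WFQ scheduler): each queue earns the same byte quantum per round, so a queue of 50-byte packets sends two packets for every 100-byte packet.

```python
from collections import deque

# Deficit round-robin sketch of byte-count fairness: every queue
# receives `quantum` bytes of credit per round and sends packets
# while its credit covers the packet at the head of the queue.
def drr_schedule(queues, quantum=100, rounds=2):
    sent = []
    deficits = [0] * len(queues)
    for _ in range(rounds):
        for i, q in enumerate(queues):
            deficits[i] += quantum
            while q and q[0][1] <= deficits[i]:
                name, size = q.popleft()
                deficits[i] -= size
                sent.append(name)
    return sent

# Queue 1 holds 100-byte packets, queue 2 holds 50-byte packets,
# matching the example in the text.
q1 = deque([("q1-a", 100), ("q1-b", 100)])
q2 = deque([("q2-a", 50), ("q2-b", 50), ("q2-c", 50), ("q2-d", 50)])
print(drr_schedule([q1, q2]))
# ['q1-a', 'q2-a', 'q2-b', 'q1-b', 'q2-c', 'q2-d']
```

Each round moves 100 bytes per queue: one packet from queue 1 and two from queue 2, exactly the 2:1 packet ratio described in the text.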
4 Queue Management
Because queues are not of infinite size, they can fill and overflow. When a queue is full, any additional packet cannot get into the queue and is dropped. This is a tail drop. The problem with tail drop is that the router cannot choose which packet to drop; even a high-priority packet is lost. So, a mechanism is necessary to do two things:
1. Try to make sure that the queue does not fill up, so that there is room for high-priority packets.
2. When packets must be dropped, allow the router to decide which ones to drop, so that lower-priority packets are discarded before higher-priority ones.
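A minimal sketch of the tail-drop behaviour described above (packet names and the queue size are illustrative): once the queue is full, every arriving packet is lost, regardless of its priority.

```python
from collections import deque

# Tail drop: the arriving packet is rejected whenever the queue is
# already at capacity; the router has no say in which packet is lost.
def tail_drop_enqueue(queue, packet, max_size):
    if len(queue) >= max_size:
        return False          # tail drop: arriving packet is discarded
    queue.append(packet)
    return True

q = deque()
for pkt in ["bulk-1", "bulk-2", "bulk-3"]:
    tail_drop_enqueue(q, pkt, max_size=3)

# The queue is full of low-priority bulk traffic, so even an urgent
# voice packet is dropped on arrival.
accepted = tail_drop_enqueue(q, "voice-urgent", max_size=3)
print(accepted)  # False
```

This is exactly the failure mode that active queue management (discussed later in the chapter) is designed to avoid.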
4 Structure of Real-time Traffic over Mobile Ad Hoc Networks
Figure 1.1 briefly introduces the three main parts of voice/video service over an IP network. Briefly, it is a process of transforming analog signals into digital signals, transmitting digital audio/video signals across an IP network in the form of packets, and reassembling and playing back these signals at the receiver. First, at the sender, the input audio/video signals are digitized and compressed by the encoder in units of frames; the frames are then packetized to form the payload of a packet, and headers (e.g. IP/UDP/RTP headers) are added to the payload to form a packet. Next, the packets are transmitted over the network at regular intervals (the packet interval, or packetization delay).
Problem Definition
QoS requirements for multimedia applications differ intrinsically from those of simple network applications, because multimedia services require more bandwidth and lossless delivery. Current QoS frameworks treat all real-time traffic in the same manner, but these networks need a mechanism to differentiate traffic into classes with different QoS provisions. In order to provide better quality of service to multimedia applications in mobile ad hoc networks (MANETs), whose resources are limited and change dynamically, this work introduces an active queue management mechanism with autonomic attributes, named Autonomic Active Queue Management (AAQM).
Methodology
The methodology of the proposed approach can be summarized as follows:
This work introduces autonomic attributes into a queue management algorithm, which is therefore called autonomic active queue management (AAQM). The mechanism is capable of configuring and adjusting itself dynamically according to network and service context information.
In AAQM, multimedia video packets are divided into several drop priority levels according to their service context, further taking into account video compression characteristics. These are considered autonomic features because they vary with the network environment. By considering these autonomic features, the mechanism drops the least important packets first and thus reduces the impact of packet loss.
Thesis Outline
The thesis is organized into six chapters.
The second chapter gives complete details of the previously proposed queue management schemes for multimedia applications over MANETs.
The third chapter gives complete details of the recently proposed weighted random early detection algorithm for queue management.
Quality of service
QoS is defined as a set of service requirements that must be met by the network while transporting a packet stream from a source to its destination. The term refers to a measure of performance for a transmission system that reflects its transmission quality and availability of service. In computer networking, Quality of Service (QoS) refers to the probability of the network meeting a given traffic contract, or, informally, to the probability of a packet passing between two points in the network. The drive for QoS has become very strong in recent years because of the growth of multimedia traffic such as voice and video, which mixes with more traditional data traffic such as FTP, Telnet, and SMB.
Sensitive Traffic
Much data traffic is tolerant of delays and packet drops; voice traffic, however, has different characteristics. Voice is low volume but is intolerant of jitter (delay variation) and packet loss. Video is also intolerant of jitter and packet loss, with the added complication of being very bursty at times. This convergence of multimedia traffic with traditional data traffic is set to grow.
2 Where QoS is applied
The most significant network bottlenecks exist at remote access points, WAN access, Internet access, and the servers. Once a packet is on the core of the network, it generally deserves to be there, provided that security is intact. Many QoS technologies concern how packets are handled as they enter and leave a network, because merely adding more bandwidth at the edge is only a short-term solution that resolves capacity and perhaps some congestion problems. Adding bandwidth does not resolve jitter or add any traffic prioritization features.
Literature Review
With the widespread use of wireless devices and multimedia technologies, MANETs are expected to provide QoS to multimedia applications. However, the present IP network is not capable of providing QoS that meets the requirements of real-time video flows. Packet losses have a serious impact on compressed video streams, and protecting video data from packet loss is a widely discussed topic in the QoS field [1]. It is difficult to restrict packet loss to a very low level using current packet dropping control technologies; moreover, even a very small packet loss may damage video quality severely [2]. A main problem for video stream delivery in wireless networks is how to protect the perceived video quality in spite of the existing packet loss.
3.2 Source Quench
Early descriptions of IP source quench messages suggest that routers could send source quench messages to hosts before the buffer space at the router reaches capacity and before packets have to be dropped at the router [17]. One proposal suggests that the router send source quench messages when the queue size exceeds a certain threshold, and outlines a possible method for flow control at the source hosts in response to these messages [18]. The proposal also suggests that, when the router's queue size approaches its maximum, the router could discard arriving packets other than Internet Control Message Protocol (ICMP) packets. The router congestion control currently used in this method is the source quench message of RFC 792: when a gateway responds to congestion by dropping datagrams, it may send an ICMP source quench message to the source of the dropped datagram.
First in First Out
In the First In First Out (FIFO) method, the gateway monitors the average queue size to detect the initial stages of congestion; traffic from a number of connections is multiplexed together with FIFO scheduling. Not only is FIFO scheduling useful for sharing delay among connections and reducing delay for a particular connection during its period of burstiness, but it also scales well and is easy to implement efficiently [19].
Active Queue Management
Active Queue Management (AQM) can improve the performance of TCP and has been recommended by the Internet Engineering Task Force (IETF) for use in the routers of the next-generation Internet. The goal of AQM is threefold. First, to improve throughput by reducing the number of packets dropped; this is achieved by keeping the average queue length small in order to absorb naturally occurring bursts without dropping packets. Second, AQM provides low delay to interactive services by maintaining a small average queue length. Third, AQM avoids the lock-out phenomenon arising from tail drop, in which a single connection (or a few flows) monopolizes the buffer space so that packets from other connections are dropped, causing unfairness in bandwidth sharing among connections.
1 Random early detection
The idea behind random early detection (RED), also known as random early discard or random early drop, is to provide feedback to responsive flows (like TCP) as soon as possible, before the queue overflows, in order to indicate that congestion is imminent instead of waiting until the congestion has become excessive. Packet drops are also distributed more fairly across all flows. RED is an active queue management algorithm and a congestion avoidance algorithm. In the conventional tail drop algorithm, a router or other network component buffers as many packets as it can and simply drops the ones it cannot buffer. If buffers are constantly full, the network is congested.
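RED's drop decision can be sketched as follows. The threshold names min_th, max_th and max_p follow the common RED parameterization; the numeric values and the EWMA weight are illustrative, not recommended settings.

```python
import random

# Sketch of RED's drop decision: no drops below min_th, certain drops
# at or above max_th, and a drop probability rising linearly toward
# max_p in between.
def red_drop(avg_qlen, min_th=5, max_th=15, max_p=0.1):
    if avg_qlen < min_th:
        return False                      # no congestion: never drop
    if avg_qlen >= max_th:
        return True                       # severe congestion: always drop
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p            # probabilistic early drop

# RED tracks an exponentially weighted moving average of the queue
# length, so short bursts do not trigger drops (weight w_q illustrative).
def update_avg(avg, qlen, w_q=0.002):
    return (1 - w_q) * avg + w_q * qlen

print(red_drop(3))    # below min_th -> False
print(red_drop(20))   # at or above max_th -> True
```

Because the drop is probabilistic between the thresholds, different flows see drops spread over time rather than a burst of tail drops when the buffer finally overflows.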
Weighted random early detection (WRED)
Weighted random early detection (WRED) is a queue management algorithm with congestion avoidance capabilities. It extends random early detection (RED) so that a single queue may have several different queue thresholds, each associated with a particular traffic class. WRED combines the capabilities of the RED algorithm with the IP Precedence feature to provide preferential handling of higher-priority packets: it can selectively discard lower-priority traffic when the interface begins to get congested, providing differentiated performance characteristics.
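The per-class thresholds can be sketched by keying a RED profile on the packet's IP precedence. The profile values below are illustrative assumptions, not Cisco defaults: low-precedence traffic gets lower thresholds and a higher max_p, so it enters its drop region earlier.

```python
import random

# Hypothetical WRED profiles: (min_th, max_th, max_p) per IP precedence.
PROFILES = {
    0: (4, 10, 0.2),    # best-effort: dropped earlier and more aggressively
    5: (8, 15, 0.05),   # higher precedence: dropped later and less often
}

def wred_drop(avg_qlen, precedence):
    min_th, max_th, max_p = PROFILES[precedence]
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p

# At the same average queue length, precedence-5 traffic is still safe
# while precedence-0 traffic can already be in its drop region.
print(wred_drop(6, 5))    # below its min_th -> False
print(wred_drop(12, 0))   # above precedence-0 max_th -> True
```

This is the mechanism by which WRED discards lower-priority traffic first as congestion builds, while higher-priority packets continue to pass.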
Autonomic Active Queue Management
Autonomic features
In general, QoS within a network aims to provide resource assurance and service differentiation. Based on this aim, some researchers think it is not necessary to make fundamental changes to existing best-effort services and the underlying Internet protocols; their approach is to add more bandwidth and switching capacity to provide satisfactory QoS over IP networks (e.g. using Wavelength Division Multiplexing (WDM) to make transmission bandwidth abundant and inexpensive). Opponents of this view argue that additional bandwidth can be very costly, depending on who owns it.
3 Queue management operation
The collecting operation of the autonomic loop monitors the queue length to estimate the network congestion situation; the average queue length and the real-time queue length are used as congestion metrics. In our design, the interface queue of a wireless node for video packets is a FIFO queue, but in the internal queue structure, packets map directly to five virtual queues (VQs) according to the priority indexes (PIs) recorded at the source end. Packet drop priority increases from VQ1 to VQ5. The algorithm uses the highest-drop-priority dropping method: if a packet drop is needed, the arriving packet enters its corresponding virtual queue, and the first packet of the highest-drop-priority non-empty virtual queue is dropped. Parameters in the RED and WRED queue management algorithms are static; the parameter maxp is set to a fixed value. In our design, by contrast, these parameters are adjusted dynamically according to the network and service context.
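The virtual-queue structure above can be sketched as follows. This is a minimal sketch of the dropping rule only (capacity, class name, and frame labels are illustrative, and the autonomic parameter adaptation is omitted): one physical queue whose packets map to VQ1..VQ5 by priority index, with drops taken from the highest-drop-priority non-empty VQ.

```python
from collections import deque

# Five virtual queues VQ1..VQ5; drop priority increases from VQ1 to
# VQ5, so when a drop is needed the head of the highest-drop-priority
# non-empty VQ is discarded ("highest drop priority dropping").
class AAQMQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.vqs = {pi: deque() for pi in range(1, 6)}   # VQ1..VQ5

    def __len__(self):
        return sum(len(q) for q in self.vqs.values())

    def enqueue(self, packet, pi):
        # The arriving packet always enters its own virtual queue first.
        self.vqs[pi].append(packet)
        if len(self) > self.capacity:
            self._drop_highest_priority()

    def _drop_highest_priority(self):
        for pi in range(5, 0, -1):       # VQ5 first: most droppable
            if self.vqs[pi]:
                self.vqs[pi].popleft()
                return

q = AAQMQueue(capacity=2)
q.enqueue("I-frame", pi=1)   # important video packet: lowest drop priority
q.enqueue("B-frame", pi=5)   # least important packet
q.enqueue("P-frame", pi=3)   # over capacity -> the B-frame is dropped
print(len(q.vqs[5]))  # 0
```

The effect is that less important packets (e.g. high-PI B-frames) absorb the losses, protecting the packets that matter most to decoded video quality.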
Result analysis
This chapter illustrates the performance evaluation of the proposed approach. It also gives a comparative analysis of the proposed AAQM against WRED, evaluating the PSNR of the video recovered using each scheme. We use simulation to compare the performance of the proposed AAQM algorithm with WRED in a MANET scenario; the queue management algorithm is applied on the wireless interface queue.
Conclusions
In this work a new queue management scheme is proposed to improve multimedia flow delivery quality in mobile ad hoc networks. For this purpose we introduced autonomic attributes into the queue management algorithm. The mechanism is capable of configuring and adjusting itself dynamically according to network and service context information. In AAQM, multimedia video packets are divided into several drop priority levels according to their service context, further taking into account video compression characteristics. A simulation performed in MATLAB compares the performance of the proposed AAQM with WRED when transmitting MPEG-4 formatted video flows. The results show that AAQM can protect important video packets and effectively reduce the impact of packet loss on video quality.