
ON WIRELESS SCHEDULING ALGORITHMS FOR MINIMIZING THE QUEUE-OVERFLOW PROBABILITY
Domain
NETWORKING

Technology
JAVA

Abstract:

ON WIRELESS SCHEDULING ALGORITHMS FOR MINIMIZING THE QUEUE-OVERFLOW PROBABILITY

In this paper, we are interested in wireless scheduling algorithms for the downlink of a single cell that can minimize the queue-overflow probability. Specifically, in a large-deviation setting, we are interested in algorithms that maximize the asymptotic decay-rate of the queue-overflow probability, as the queue-overflow threshold approaches infinity. We first derive an upper bound on the decay-rate of the queue-overflow probability over all scheduling policies. We then focus on a class of scheduling algorithms collectively referred to as the α-algorithms. For a given α ≥ 1, the α-algorithm picks the user for service at each time that has the largest product of the transmission rate multiplied by the backlog raised to the power α. We show that when the overflow metric is appropriately modified, the minimum-cost-to-overflow under the α-algorithm can be achieved by a simple linear path, and it can be written as the solution of a vector-optimization problem. Using this structural property, we then show that when α approaches infinity, the α-algorithms asymptotically achieve the largest decay-rate of the queue-overflow probability. Finally, this result enables us to design scheduling algorithms that are both close-to-optimal in terms of the asymptotic decay-rate of the overflow probability, and empirically shown to maintain small queue-overflow probabilities over queue-length ranges of practical interest.
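To make the selection rule concrete, here is a minimal Java sketch of the α-algorithm's per-slot decision (Java being the technology listed above). The class and method names (AlphaScheduler, pickUser) and the array arguments are illustrative assumptions, not from the paper; only the rule itself, serving the user with the largest (Qi)^α · Fim, comes from the abstract.

/**
 * Minimal sketch of the per-slot decision of the alpha-algorithm
 * described in the abstract: serve the user i with the largest
 * (Qi)^alpha * Fim. Class and method names are illustrative only.
 */
public class AlphaScheduler {
    private final double alpha; // algorithm parameter, alpha >= 1

    public AlphaScheduler(double alpha) {
        if (alpha < 1) throw new IllegalArgumentException("alpha must be >= 1");
        this.alpha = alpha;
    }

    /**
     * @param backlog backlog[i] = Qi, current queue length of user i
     * @param rate    rate[i] = Fim, service rate of user i in the
     *                current channel state m
     * @return index of the user to serve in this time-slot
     */
    public int pickUser(double[] backlog, double[] rate) {
        int best = 0;
        double bestWeight = -1.0;
        for (int i = 0; i < backlog.length; i++) {
            double weight = Math.pow(backlog[i], alpha) * rate[i];
            if (weight > bestWeight) {
                bestWeight = weight;
                best = i;
            }
        }
        return best;
    }
}

For α = 1 this reduces to the familiar MaxWeight-style rule; larger α weights the backlog more heavily relative to the instantaneous rate in the decision.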
Index Terms—Queue-Overflow Probability, Wireless Scheduling, Large Deviations, Asymptotically Optimal Algorithms, Cellular System.
I. INTRODUCTION
Link scheduling is an important functionality in wireless networks due to both the shared nature of the wireless medium and the variations of the wireless channel over time. In the past, it has been demonstrated that, by carefully choosing the scheduling decision based on the channel state and/or the demand of the users, the system performance can be substantially improved (see, e.g., the references in [2]). Most studies of scheduling algorithms have focused on optimizing the long-term average throughput of the users, or in other words, stability.

Consider the downlink of a single cell in a cellular network. The base-station transmits to N users. There is a queue Qi associated with each user i = 1, 2, ..., N. Due to interference, at any given time the base-station can only serve the queue of one user. Hence, this system can be modelled as a single server serving N queues. Assume that data for user i arrives at the base-station at a constant rate λi. Further, assume a slotted model, and in each time-slot the wireless channel can be in one of M states. In each state m = 1, 2, ..., M, if the base-station picks user i to serve, the corresponding service rate is Fim. Hence, at each time-slot Qi increases by λi, and if it is served and the channel is at state m, Qi decreases by Fim. We assume that perfect channel information is available at the base-station.

In a stability problem [3]–[5], the goal is to find algorithms for scheduling the transmissions such that the queues are stabilized at given offered loads. An important result along this direction is the development of the so-called "throughput-optimal" algorithms [3]. A scheduling algorithm is called throughput-optimal if, at any offered load under which any other algorithm can stabilize the system, this algorithm can stabilize the system as well. It is well known that the following class of scheduling algorithms is throughput-optimal [3]–[5]: for a given α ≥ 1, the base-station picks the user for service at each time that has the largest product of the transmission rate multiplied by the backlog raised to the power α. In other words, if the channel is in state m, the base-station chooses the user i with the largest (Qi)^α Fim. To emphasize the dependency on α, in the sequel we will refer to this class of throughput-optimal algorithms as α-algorithms.

While stability is an important first-order metric of success, for many delay-sensitive applications it is far from sufficient. In this paper, we are interested in the probability of queue overflow, which is equivalent to the delay-violation probability under certain conditions. The question that we attempt to answer is the following: is there an optimal algorithm in the sense that, at any given offered load, the algorithm can achieve the smallest probability that any queue overflows, i.e., the smallest value of P[max_{1≤i≤N} Qi(T) ≥ B], where B is the overflow threshold? Note that if we impose a quality-of-service (QoS) constraint on each user in the form of an upper bound on the queue-overflow probability, then the above optimality condition will also imply that the algorithm can support the largest set of offered loads subject to the QoS constraint.

Unfortunately, calculating the exact queue distribution is often mathematically intractable. In this paper, we use large-deviation theory [11], [12] and reformulate the QoS constraint in terms of the asymptotic decay-rate of the queue-overflow probability as B approaches infinity.
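As an illustration of the model just described, the following self-contained Java sketch simulates the slotted system and estimates the overflow probability P[max_{1≤i≤N} Qi(T) ≥ B] under the α-algorithm by Monte Carlo. The arrival rates, rate matrix, i.i.d. uniform channel states, and all names here are toy assumptions for illustration, not values or code from the paper.

import java.util.Random;

/**
 * Toy Monte Carlo estimate of the queue-overflow probability
 * P[max_i Qi(T) >= B] for the slotted model in the introduction.
 * All parameter values below are illustrative assumptions.
 */
public class OverflowSim {
    public static void main(String[] args) {
        double alpha = 2.0;               // alpha-algorithm parameter
        double[] lambda = {0.3, 0.4};     // assumed per-slot arrival rates
        double[][] F = {
            {1.0, 0.2},                   // F[m][i]: service rate of user i
            {0.2, 1.0}                    // when the channel is in state m
        };
        int T = 1000;                     // time horizon (slots)
        int B = 50;                       // overflow threshold
        int runs = 10_000;                // Monte Carlo sample paths
        Random rng = new Random(0);
        int overflows = 0;

        for (int r = 0; r < runs; r++) {
            double[] Q = new double[lambda.length];
            boolean overflowed = false;
            for (int t = 0; t < T && !overflowed; t++) {
                int m = rng.nextInt(F.length); // i.i.d. uniform channel state (assumption)
                // alpha-algorithm: serve the user maximizing (Qi)^alpha * F[m][i]
                int served = 0;
                double bestWeight = -1.0;
                for (int i = 0; i < Q.length; i++) {
                    double w = Math.pow(Q[i], alpha) * F[m][i];
                    if (w > bestWeight) { bestWeight = w; served = i; }
                }
                for (int i = 0; i < Q.length; i++) {
                    Q[i] += lambda[i];                                      // arrivals
                    if (i == served) Q[i] = Math.max(0.0, Q[i] - F[m][i]);  // service
                    if (Q[i] >= B) overflowed = true;                       // threshold check
                }
            }
            if (overflowed) overflows++;
        }
        System.out.printf("Estimated P[max Qi >= %d within %d slots] = %.4f%n",
                B, T, (double) overflows / runs);
    }
}

With these toy loads the system sits well inside the stability region, so the estimate mainly shows how the overflow probability can be probed empirically as B varies; the paper's contribution is the analytical characterization of its decay rate in B.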


Download full report
http://cobweb.ecn.purdue.edu/~linx/paper/ton10-ldp.pdf