22-05-2013, 02:37 PM
A NOVEL APPROACH FOR LOAD BALANCING IN CLOUD COMPUTING SYSTEMS
ABSTRACT
Cloud Computing is a computing model, not a technology. In
this model customers plug into the cloud to access IT
resources that are priced and provisioned on demand. A cloud
consists of several elements such as clients, datacenters and
distributed servers. It offers fault tolerance, high
availability, scalability, flexibility, reduced overhead for
users, reduced cost of ownership, on-demand services, etc.
Central to these benefits lies the establishment of an effective
load balancing algorithm. The load can be CPU load, memory
capacity, delay or network load. In clouds, load balancing is
applied across different data centers to ensure network
availability by minimizing the impact of hardware and
software failures and mitigating resource limitations.
Load balancing is a core and challenging issue in
Cloud Computing. It is the process of distributing load
among the various nodes of a distributed system to improve both
resource utilization and job response time, while avoiding
a situation in which some nodes are heavily loaded while
others are idle or doing very little work. Load balancing
ensures that every processor in the system, or every node in
the network, does approximately the same amount of work at
any instant of time. The technique can be sender-initiated,
receiver-initiated or symmetric (a combination of the
sender-initiated and receiver-initiated types). Our objective is to
develop a novel load balancing algorithm using the divisible
load scheduling theorem to maximize or minimize different
performance parameters (for example, throughput and latency) for
clouds of different sizes (the virtual topology depending on
the application requirement).
INTRODUCTION
Cloud computing is a new computing paradigm that aims to
transform computing into a utility, just as electricity was
first generated at home and evolved to be supplied by a few
utility providers. It is forecast that more and more
users will rent computing as a service, moving
processing power and storage to centralized infrastructures
rather than locating them in client hardware.
Datacenter
A datacenter is a collection of servers hosting different
applications. An end user connects to the datacenter to
subscribe to different applications. A datacenter may be located
at a large distance from its clients. Nowadays, virtualization
is used to install software that allows multiple
instances of virtual server applications to run on one physical machine.
Distributed Servers
Distributed servers are the parts of a cloud that are present
throughout the Internet, hosting different applications. While
using an application from the cloud, however, users feel as if
they are running it on their own machine.
CLOUD COMPUTING SERVICE
MODELS
Cloud computing can be classified by the model of service it
offers into one of three different groups. These will be
described using the XaaS taxonomy, first used by Scott
Maxwell in 2006, where “X” is Software, Platform, or
Infrastructure, and the final "S" is for Service. It is important
to note, as shown in the following Figure, that SaaS is built
on PaaS, and the latter on IaaS. Hence this is not a
mutually exclusive classification; rather, it concerns
the level of the service provided. Each of these service
models is described in a following subsection.
LOAD BALANCING
Load balancing is the process of redistributing the total load among
the individual nodes of the collective system to make resource
utilization effective and to improve job response time,
while removing the condition in which some of the
nodes are overloaded while others are underloaded. A
load balancing algorithm that is dynamic in nature does not
consider the previous state or behavior of the system; that is, it
depends only on the present behavior of the system. The important
things to consider while developing such an algorithm are:
estimation of load, comparison of loads, stability of different
systems, system performance, interaction between the
nodes, the nature of the work to be transferred, selection of nodes, and
many others [4]. The load considered can be in terms of
CPU load, amount of memory used, delay or network load.
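As a minimal sketch of such a dynamic policy, the toy scheduler below (the `Node` class and the least-utilized selection rule are illustrative assumptions, not the paper's algorithm) estimates load as a single utilization ratio and always assigns the next job to the currently least-utilized node:

```python
class Node:
    """A worker node with a simple scalar load metric (e.g. CPU load)."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # maximum units of work the node can hold
        self.load = 0              # units of work currently assigned

    def utilization(self):
        return self.load / self.capacity

def assign(nodes, job_cost):
    """Dynamic policy: route the job to the least-utilized node right now,
    ignoring any history of past assignments."""
    target = min(nodes, key=Node.utilization)
    target.load += job_cost
    return target

nodes = [Node("n1", 100), Node("n2", 50), Node("n3", 80)]
for cost in [10, 30, 20, 5, 25]:
    assign(nodes, cost)
# final loads: n1=40, n2=30, n3=20 (utilizations 0.40, 0.60, 0.25)
```

Because only the present utilization is consulted, this is dynamic in the sense used above; a static policy would instead fix the assignment in advance from known capacities.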
DIVISIBLE LOAD SCHEDULING
THEORY IN CLOUDS
Divisible load scheduling theory (DLT), applied to clouds, concerns the
optimal division of load among a number of master
computers, slave computers and their communication links.
Our objective is to obtain a partition of the
processing load of a cloud, connected via different
communication links, such that the entire load can be
distributed and processed in the shortest possible amount of
time [9].
The whole Internet can be viewed as a cloud of many
connectionless and connection-oriented services. The concept
of load balancing in wireless sensor networks (WSNs)
proposed in [9] can also be applied to clouds, since a WSN is
analogous to a cloud with a number of master computers (servers)
and a number of slave computers (clients). The slave computers are
assumed to have a certain measurement capacity. We assume
that computation is done by the master computers once
all the measured data has been gathered from the corresponding slave
computers. Only the measurement and communication times
of the slave computers are considered; the computation
time of the slave computers is neglected. Here we consider
both heterogeneous and homogeneous clouds: the
cloud elements may possess different measurement capacities
and communication link speeds, or the same measurement
capacities and communication link speeds. One slave
computer may be connected to one or more master computers
at a given instant of time. In DLT for clouds, an
arbitrarily divisible load with no precedence relations
is first divided among the various master
computers (for simplicity, the load is divided equally
between the master computers here), and then each master computer
distributes its share among the corresponding slave computers
so that the entire load can be processed in the shortest possible
amount of time.
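The two-stage division just described can be sketched numerically. The snippet below is an illustrative assumption, not the paper's solution: it splits the load equally among masters (matching the simplification above) and then, assuming each master transmits to its slaves concurrently, gives slave i a fraction proportional to 1/t_i, where t_i is the slave's combined communication-plus-measurement time per unit load, so that all slaves under one master finish at the same moment:

```python
def split_load(total, masters, slaves_per_master):
    """Divide `total` load equally among masters, then split each master's
    share so that all of its slaves finish simultaneously, assuming the
    master sends to its slaves concurrently.

    slaves_per_master: one list per master of per-unit-load times
    t_i = z_i + w_i (communication time + measurement time).
    """
    master_share = total / len(masters)  # equal division among masters
    plan = {}
    for m, times in zip(masters, slaves_per_master):
        # Slave i finishes at alpha_i * t_i; equal finish times imply
        # alpha_i proportional to 1 / t_i.
        inv = [1.0 / t for t in times]
        s = sum(inv)
        plan[m] = [master_share * x / s for x in inv]
    return plan

# two masters, 120 units of load; m1's slaves are heterogeneous (t = 1, 2),
# m2's are homogeneous (t = 1, 1)
plan = split_load(120, ["m1", "m2"], [[1.0, 2.0], [1.0, 1.0]])
# m1's 60 units split ~[40, 20]; m2's 60 units split ~[30, 30]
```

Note the slow slave of m1 (t = 2) receives half as much load as the fast one, yet both finish at time 40; sequential (one-link-at-a-time) distribution would instead require the recursive DLT solution.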
CONCLUSION
We have discussed the basic concepts of Cloud Computing and
load balancing, and studied some existing load balancing
algorithms that can be applied to clouds. In addition,
the closed-form solutions for the minimum measurement and
reporting time in single-level tree networks under different
load balancing strategies were also studied.
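For concreteness, the standard single-level tree result from the DLT literature can be evaluated numerically. The sketch below assumes sequential distribution from the root, no computation at the root, and the usual equal-finish-time condition, which yields the recursion alpha[i+1] = alpha[i] * y[i] / (z[i+1] + y[i+1]); the variable names z (per-unit communication time) and y (per-unit measurement time) are our notation, not the paper's:

```python
def single_level_tree(z, y):
    """Load fractions for a single-level tree: the root sends load
    sequentially to n slaves; slave i needs z[i] time units per unit load
    to receive it and y[i] to measure/process it. Requiring all slaves to
    finish at the same time gives the recursion below."""
    n = len(z)
    alpha = [1.0]
    for i in range(n - 1):
        alpha.append(alpha[i] * y[i] / (z[i + 1] + y[i + 1]))
    total = sum(alpha)
    alpha = [a / total for a in alpha]  # normalize: fractions sum to 1
    # finish time = first slave's communication plus its measurement time
    finish = alpha[0] * (z[0] + y[0])
    return alpha, finish

# homogeneous example: 3 slaves, z = 0.2 and y = 1.0 per unit load
alpha, T = single_level_tree([0.2] * 3, [1.0] * 3)
```

In the homogeneous case each successive slave receives a fraction scaled by y/(z + y), so earlier slaves, whose data arrives sooner, carry more of the load.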