22-09-2014, 02:07 PM
Effective Scheduling in Cloud Computing is a Risk?
Effective Scheduling.pdf (Size: 477.31 KB / Downloads: 18)
Abstract
Cloud computing has attracted attention as an important platform for software deployment, with perceived
benefits such as elasticity to fluctuating load, and reduced operational costs compared to running in enterprise data
centers. While some software is written from scratch especially for the cloud, many organizations also wish to migrate
existing applications to a cloud platform. A cloud environment is a highly shared environment in which
multiple clients connect to a common platform to access services and products. A cloud
environment can be a public or a private cloud. In such an environment, all resources are available on an integrated
platform where multiple users can issue requests at the same time, so some approach is required to
perform effective scheduling and resource allocation. The situation becomes critical when a cloud server is
overloaded; in that case, to continue providing effective service to clients, process migration is performed. Migration is
the concept of switching an allocated process to some other virtual machine or cloud to release the load and to
execute cloud requests effectively. The presented work is in the same direction. We consider a cloud
environment with multiple clouds, each hosting multiple virtual machines. All the machines are homogeneous, and each
cloud is assigned a specific priority. In this paper we propose a scheduling algorithm that uses a migration strategy.
INTRODUCTION
In cloud computing, virtual machine technology plays an important role for server consolidation. Physical computing
resources are efficiently managed in enormous datacenters of service providers, and virtualized computing resources are
offered to remote customers in a pay-per-use manner. IaaS (Infrastructure-as-a-Service) providers need to run
customers' VMs as much as possible to fully utilize their datacenter capacity; increasing datacenter utilization is the key
to success for a datacenter business.
Opportunities and Challenges [2]
It enables services to be used without any understanding of their infrastructure.
Data and services are stored remotely but are accessible from 'anywhere'.
Use of cloud computing means dependence on others, and that could possibly limit flexibility and innovation.
Security could prove to be a big issue. It is still unclear how safe outsourced data is, and when using these
services the ownership of data is not always clear.
Cloud computing means different things to different people, and its benefits differ accordingly. To IT managers, it
means minimizing capital expenditure by outsourcing most of the hardware and software resources. To ISVs, it means
reaching out to more users by offering a SaaS solution. To end users, it means accessing an application from anywhere
using any device. In cloud computing, the relationship between users and machines is many-to-many: many users can
access an application that is served from many machines. Now, what was the reason for this evolution? What were the
driving factors behind it? The reason for the evolution from PC-based applications to Internet-based applications was
obvious. This happened because multiple users needed to access an application from their own machines
[3]. The only way that was possible was to have the application hosted on a central server and have separate client
applications communicate with it. The evolution from Internet-based applications to cloud computing is a bit more
complex. There are several industry trends and user behaviors affecting this shift in technology. Here, we touch upon
what is arguably the biggest driving factor behind cloud computing.
Literature Review
In 2009, Kento Sato presented a work, "A Model-Based Algorithm for Optimizing I/O Intensive Applications in
Clouds using VM-Based Migration". The author proposes a novel model-based I/O performance optimization algorithm for
data-intensive applications running on a virtual cluster, which determines virtual machine (VM) migration strategies;
in the underlying model, a weighted edge represents the migration of a VM and its time cost [4].
In 2010, Takahiro Hirofuchi presented a work, "Enabling Instantaneous Relocation of Virtual Machines with a
Lightweight VMM Extension". In this paper, the author proposes an advanced live-migration mechanism enabling
instantaneous relocation of VMs. To minimize the time needed for switching the execution host, memory pages are
transferred after a VM resumes at the destination host. In addition, for memory-intensive workloads, the presented
migration mechanism moves all the state of a VM faster than existing migration technology [5].
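The idea of resuming first and transferring memory afterwards can be made concrete with a toy cost comparison. This is an illustrative sketch only, not the paper's implementation; the page counts and per-page costs below are hypothetical numbers chosen for the example.

```python
# Toy illustration of the relocation idea described above: with a
# resume-first scheme, only the small CPU/device state is sent before the
# VM resumes at the destination, so the execution-host switch is fast;
# memory pages follow afterwards.

def precopy_switch_time(num_pages, per_page_cost_ms):
    # Conventional pre-copy: all pages must arrive before the VM resumes.
    return num_pages * per_page_cost_ms

def postcopy_switch_time(cpu_state_cost_ms):
    # Resume-first: only the CPU/device state blocks the switch.
    return cpu_state_cost_ms

if __name__ == "__main__":
    pages, per_page, cpu_state = 262_144, 0.01, 5.0  # hypothetical costs (ms)
    print("pre-copy switch  :", precopy_switch_time(pages, per_page), "ms")
    print("resume-first switch:", postcopy_switch_time(cpu_state), "ms")
```

The comparison shows why the switch time becomes independent of the VM's memory footprint when pages are transferred after resumption.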
In 2011, Jia Rao presented a work, "Self-Adaptive Provisioning of Virtualized Resources in Cloud Computing".
In this paper, the author proposes a distributed learning mechanism that facilitates self-adaptive virtual machine
resource provisioning. Cloud resource allocation is treated as a distributed learning task in which each VM, as a highly
autonomous agent, submits resource requests according to its own benefit. The mechanism evaluates the requests and
replies with feedback. The author develops a reinforcement learning algorithm with a highly efficient representation of
experiences as the heart of the VM-side learning engine [6].
In 2012, Michael Menzel presented a work, "CloudGenius: Decision Support for Web Server Cloud Migration".
The author presents a framework (called CloudGenius) which automates the decision-making process based on a model and
factors specifically for Web server migration to the Cloud. CloudGenius leverages a well-known multi-criteria decision-
making technique, the Analytic Hierarchy Process, to automate the selection process based on a model, factors, and
QoS parameters related to an application. The author presents an implementation of CloudGenius that has been validated
through experiments [7].
In 2013, Christina Delimitrou presented a work, "Paragon: QoS-Aware Scheduling for Heterogeneous
Datacenters". The author presents Paragon, an online and scalable datacenter scheduler that is heterogeneity- and
interference-aware. Paragon is derived from robust analytical methods; instead of profiling each application in detail,
it leverages information the system already has about applications it has previously seen. It uses collaborative
filtering techniques to quickly and accurately classify an unknown incoming workload with respect to heterogeneity
and interference in multiple shared resources, by identifying similarities to previously scheduled applications.
SCHEDULING
The scheduling mechanism is one of the most important components of a computer system. Scheduling is the strategy by
which the system decides which task should be executed at any given time. The following types of techniques are used
for service allocation.
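As a minimal illustration of one widely used allocation strategy (round-robin, given here as an example and not as the paper's technique), requests can be handed to the available servers in circular order:

```python
# Round-robin service allocation: each incoming request is assigned to
# the next server in a fixed circular rotation, spreading load evenly.

from itertools import cycle

def round_robin(servers, requests):
    """Assign each request to the next server in circular order."""
    assignment = {}
    rotation = cycle(servers)
    for req in requests:
        assignment[req] = next(rotation)
    return assignment

if __name__ == "__main__":
    servers = ["vm1", "vm2", "vm3"]      # hypothetical VM names
    reqs = ["r1", "r2", "r3", "r4"]
    print(round_robin(servers, reqs))    # r4 wraps back around to vm1
```

Round-robin is simple and fair under uniform load, but it ignores the current load on each server, which is precisely the gap that load-aware scheduling and migration aim to close.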
Proposed Algorithm
There are different service providers that provide cloud services to different users for different business needs. We
propose a middle-layer architecture called the Intermediate Layer. In this concept, a layer is placed between the users
and the web services. The middle layer accepts user requests and also monitors the cloud servers for the current load
on their services. The middle layer performs cloud allocation sequentially, and if service allocation is not possible
on a specific cloud, it migrates the process from one cloud to another.
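The behavior of such an intermediate layer can be sketched as follows. This is a minimal illustration under simplifying assumptions (each cloud has a fixed capacity and a numeric priority; the names `Cloud` and `IntermediateLayer` are ours, not from the paper):

```python
# Sketch of the proposed middle layer: try clouds sequentially in
# priority order, and if the preferred cloud is overloaded, the request
# "migrates" to the next cloud in the sequence.

class Cloud:
    def __init__(self, name, priority, capacity):
        self.name = name
        self.priority = priority      # lower number = higher priority
        self.capacity = capacity      # max processes this cloud can host
        self.processes = []

    def has_room(self):
        return len(self.processes) < self.capacity

class IntermediateLayer:
    def __init__(self, clouds):
        # Clouds are tried in priority order.
        self.clouds = sorted(clouds, key=lambda c: c.priority)

    def allocate(self, process):
        """Sequentially allocate a process; fall through to the next
        cloud whenever the current one is overloaded."""
        for cloud in self.clouds:
            if cloud.has_room():
                cloud.processes.append(process)
                return cloud.name
        return None  # every cloud is overloaded; the request must wait

if __name__ == "__main__":
    layer = IntermediateLayer([Cloud("c1", 1, 2), Cloud("c2", 2, 2)])
    placements = [layer.allocate(f"p{i}") for i in range(5)]
    print(placements)  # ['c1', 'c1', 'c2', 'c2', None]
```

In a real deployment the capacity check would be replaced by live load monitoring of the cloud servers, and the fall-through step would trigger an actual process migration rather than a simple reassignment.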