23-07-2012, 10:55 AM
Dynamic Resource Allocation
ABSTRACT
In recent years, ad-hoc parallel data processing has emerged as one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds.
Major Cloud computing companies have started to integrate frameworks for parallel data processing in their product portfolio, making it easy for customers to access these services and to deploy their programs.
However, the processing frameworks which are currently used have been designed for static, homogeneous cluster setups and disregard the particular nature of a cloud.
INTRODUCTION
EXISTING SYSTEM:
A growing number of companies have to process huge amounts of data in a cost-efficient manner.
Classic representatives for these companies are operators of Internet search engines.
The vast amount of data they have to deal with every day has made traditional database solutions prohibitively expensive.
PROPOSED SYSTEM:
In recent years, a variety of systems to facilitate many-task computing (MTC) have been developed.
Although these systems typically share common goals (e.g. to hide issues of parallelism or fault tolerance), they aim at different fields of application.
MapReduce is designed to run data analysis jobs on a large amount of data, which is expected to be stored across a large set of shared-nothing commodity servers.
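To make the model concrete, here is a minimal sketch of the MapReduce programming style in plain Python (a toy in-memory imitation, not the actual framework): user-supplied map and reduce functions, with the shuffle phase simulated by grouping intermediate keys.

```python
# Toy sketch of the MapReduce model: map, shuffle (group by key), reduce.
# This runs in one process; a real framework distributes each phase
# across shared-nothing commodity servers.
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply map_fn to every record; collect (key, value) pairs."""
    intermediate = []
    for record in records:
        intermediate.extend(map_fn(record))
    return intermediate

def shuffle(pairs):
    """Group intermediate values by key (the 'shuffle' step)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply reduce_fn to each key and its grouped values."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Classic word-count example.
def word_count_map(doc):
    return [(word, 1) for word in doc.split()]

def word_count_reduce(word, counts):
    return sum(counts)

docs = ["cloud data processing", "parallel data processing"]
result = reduce_phase(shuffle(map_phase(docs, word_count_map)),
                      word_count_reduce)
# result["data"] == 2, result["processing"] == 2
```

The point of the model is that only `word_count_map` and `word_count_reduce` are job-specific; the framework owns partitioning, scheduling, and fault tolerance.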
NETWORK MODULE:
Client-server computing (or networking) is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters (clients).
Often clients and servers operate over a computer network on separate hardware.
A server machine is a high-performance host that is running one or more server programs which share its resources with clients.
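The client-server interaction described above can be sketched with Python's standard socket library. This is a toy single-request server (the service it shares is simply upper-casing the request); port 0 asks the OS for any free port.

```python
# Minimal client-server sketch: a server shares a resource (an
# upper-casing "service") with a client over a TCP socket.
import socket
import threading

def serve_once(sock):
    """Accept one client, upper-case its request, send the reply."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

# Server side: bind, listen, and handle one request in a thread.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

# Client side: connect over the network and request the service.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello server")
    reply = client.recv(1024)
server.close()
# reply == b"HELLO SERVER"
```

In practice the server would loop over `accept()` and serve many clients concurrently, often on separate hardware from the clients, as the text notes.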
LBS SERVICES:
In particular, users are reluctant to use LBSs, since revealing their position may be linked to their identity.
Even though a user may create a fake ID to access the service, her location alone may disclose her actual identity.
Linking a position to an individual is possible by various means, such as publicly available information (e.g., city maps).
SYSTEM MODEL:
We propose an edge ordering anonymization approach for users in road networks, which guarantees K-anonymity under the strict reciprocity requirement (described later).
We identify the crucial concept of border nodes, an important indicator of the CS size and of the query processing cost at the LS.
We consider various edge orderings, and qualitatively assess their query performance based on border nodes.
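The bucket-based flavor of this idea can be sketched as follows. This is a hedged illustration under simplifying assumptions, not the paper's exact algorithm: users are laid out along a global edge ordering and partitioned into fixed, consecutive buckets of at least K users. Because bucket membership does not depend on which user issues the query, every user in a bucket reports the same anonymizing set, which is one standard way to satisfy the reciprocity requirement.

```python
# Hedged sketch of edge-ordering K-anonymity with reciprocity.
# users_by_edge and edge_order are toy inputs, not real road-network data.
def anonymize(users_by_edge, edge_order, k):
    """users_by_edge: dict edge_id -> list of user ids.
    Returns dict: user id -> frozenset of users in its bucket."""
    # Lay users out along the global edge ordering.
    ordered_users = []
    for edge in edge_order:
        ordered_users.extend(users_by_edge.get(edge, []))
    # Partition into consecutive buckets of size k; a trailing chunk
    # smaller than k is merged into the previous bucket so every
    # bucket has at least k users.
    buckets = []
    for i in range(0, len(ordered_users), k):
        chunk = ordered_users[i:i + k]
        if len(chunk) < k and buckets:
            buckets[-1].extend(chunk)
        else:
            buckets.append(chunk)
    return {u: frozenset(b) for b in buckets for u in b}

users = {"e1": ["u1", "u2"], "e2": ["u3"], "e3": ["u4", "u5"]}
cs = anonymize(users, ["e1", "e2", "e3"], k=2)
# Every cloaked set has at least 2 users, and users in the same
# bucket (e.g., u1 and u2) report identical sets.
```

The choice of edge ordering matters exactly as the text says: orderings that keep nearby edges consecutive yield compact cloaked sets with fewer border nodes, and hence cheaper query processing at the LS.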
SCHEDULED TASK:
Recently, considerable research interest has focused on preventing identity inference in location-based services by proposing spatial cloaking techniques. In the following, we describe existing techniques for ASR computation (at the AZ) and query processing (at the LS).
At the end, we cover alternative location privacy approaches and discuss why they are inappropriate for our problem setting.
QUERY PROCESSING:
Query processing is based on an implementation of the theorem that uses (network-based) search operations as off-the-shelf building blocks.
Thus, the NAP query evaluation methodology is readily deployable on existing systems, and can be easily adapted to different network storage schemes.
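As an illustration of what "off-the-shelf building block" means here, the sketch below uses a standard network search operation (Dijkstra's algorithm) to answer a nearest-neighbor query on a toy road network. The graph and points of interest are invented for the example; the point is that the query layer needs nothing beyond an ordinary shortest-path primitive.

```python
# Nearest-neighbor query on a road network built from a standard
# search primitive (Dijkstra). graph: dict node -> [(neighbor, weight)].
import heapq

def network_nn(graph, source, pois):
    """Return the POI nearest to source by network distance."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                 # stale heap entry
        if node in pois:
            return node, d           # first POI settled is the nearest
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return None, float("inf")

roads = {"a": [("b", 2), ("c", 5)], "b": [("d", 2)], "c": [("d", 1)], "d": []}
nearest, d = network_nn(roads, "a", pois={"c", "d"})
# nearest == "d" at network distance 4 (a -> b -> d)
```

Because the search primitive is generic, the same query logic ports to different network storage schemes (adjacency lists, disk-based indexes, a graph database) by swapping only the neighbor-lookup step, which is the deployability argument the text makes.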
CONCLUSION
In this paper we have discussed the challenges and opportunities for efficient parallel data processing in cloud environments and presented Nephele, the first data processing framework to exploit the dynamic resource provisioning offered by today's IaaS clouds.
In particular, we are interested in improving Nephele's ability to adapt automatically to resource overload or underutilization during job execution.
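The kind of adaptation hinted at here can be sketched as a simple threshold rule. This is a hypothetical illustration, not Nephele's actual policy: compare recent utilization of the allocated instances against high/low watermarks and decide whether to request more VMs from the IaaS cloud or release surplus ones.

```python
# Hypothetical scaling rule (illustrative only): react to overload or
# underutilization by adjusting the number of allocated VM instances.
def scaling_decision(utilizations, high=0.85, low=0.30):
    """utilizations: recent CPU utilization (0..1), one per instance."""
    avg = sum(utilizations) / len(utilizations)
    if avg > high:
        return "allocate"   # overload: ask the cloud for more VMs
    if avg < low and len(utilizations) > 1:
        return "release"    # underutilization: give surplus VMs back
    return "keep"

# scaling_decision([0.9, 0.95, 0.88]) -> "allocate"
# scaling_decision([0.1, 0.2])        -> "release"
```

A production policy would also need hysteresis (cool-down periods) and awareness of job stage boundaries so that scaling decisions do not thrash, but the watermark structure is the core idea.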