29-09-2016, 09:25 AM
Cloud Computing Network problem and storage solutions
Using Ant colony optimization algorithm
1456670302-CloudNPSUsingAntcolonyoptimization.docx
Abstract
Cloud computing is the most network-centric form of distributed, parallel, and grid computing, and a successful transition to the cloud depends on an excellent network foundation. One of the fundamental issues in this environment, related to task scheduling and storage, is flexible access to and use of resources across a network in which service providers are connected via the Internet. Cloud computing eliminates the need to own a complete hardware and software infrastructure to meet users' requirements and applications, but it requires cloud users to understand and design more of the network to which they are exposed. This paper aims to address the main network problems in this setting and their possible solutions using the Ant Colony Optimization (ACO) algorithm.
Introduction
Cloud computing is a model in which IT resources and services are abstracted from the underlying infrastructure and provided on demand and at scale in a multi-tenant environment. From a networking standpoint, each service model requires the cloud provider to expose more or less of the network and provide more or fewer networking capabilities to cloud users. Conversely, each service model requires cloud users to understand and design more of the network to which they are exposed. The network is most exposed in the IaaS model and least in the SaaS model.

The essential technological difference between the deployment models is derived from the networking relationship between the cloud user and the cloud provider. In a private cloud, the user and provider are within the same trusted network boundary. In a public cloud, they are on different networks. In a hybrid cloud, a secured connection may exist between the user's and provider's networks, or the user's network may extend into the provider's cloud (or the reverse). In a community cloud, the structure depends on the charter and architecture of the organizations operating the cloud.

Every cloud is some combination of a service model and a deployment model. Regardless of the type of cloud, however, one fact remains true: no network means no cloud. Without networks, users cannot access their cloud services. Without networks, applications, data, and users cannot move between clouds. Without networks, the infrastructure components that must work together to create a cloud cannot do so.
There is an increasing proliferation of phones and other mobile devices being used to access applications and data in clouds across many different kinds of networks. Until recently, "mobile" mainly referred to these devices and the networks that support them. Now, however, with cloud infrastructure, applications and servers have also become mobile, able to move from one part of a cloud to another, or even from one cloud to another. This means making the network aware of, and accommodating to, not just users accessing the cloud but also the applications and data in the cloud. Key requirements include:
- Extending and interconnecting clouds, enabling application, data, and user mobility between clouds.
- Providing consistent quality of service across the entire network.
- Enforcing policies on devices, users, data, and applications regardless of location, and making the policy enforcement points themselves mobile.
- Providing a consistent policy infrastructure, centralized management, separation of duties, and the capability to deliver federated sign-on and policy enforcement across clouds.
- Creating self-service catalog, orchestration, and automation tools to provision all IT resources in the fabric.
- Enabling the collection of metrics directly from the fabric for analysis and response.
- Working with the OpenStack group to create an open-source network-as-a-service (NaaS) capability for provisioning networking resources in open cloud environments.
2. Critical Network
Networking must change because the rise of cloud models is changing what is happening on the network:
- New infrastructure: everything is becoming virtualized, infrastructure is becoming programmable, and servers and applications have mobility.
- New applications: for example, data-intensive analytics, parallel and clustered processing, telemedicine, remote experts, and community cloud services.
- New access: for example, mobile device-based access to everything and virtual desktops.
- New traffic: for example, predominantly server-to-server traffic patterns and location-independent endpoints on both sides of a service or transaction.
What we need to do with and to data has not changed. Data still needs to travel between the computing and storage components of an application and then to the user of the application. Security still must be applied to help make sure that the right users, devices, and systems have access to the right data at the right time while protecting against attacks, intrusions, breaches, and leaks. Different kinds of data and traffic have different levels of importance and network resource needs that still must be met across the entire network with quality-of-service (QoS) capabilities.
3. CloudSim
Simulation is a technique in which a program models the behavior of a system (CPU, network, etc.) by calculating the interactions between its different entities using mathematical formulas, or by capturing and playing back observations from a production system. CloudSim is a framework developed by the GRIDS Laboratory at the University of Melbourne that enables seamless modeling, simulation, and experimentation in designing cloud computing infrastructures.
3.1 CloudSim Characteristics
CloudSim can be used to model the datacenters, hosts, service brokers, and scheduling and allocation policies of a large-scale cloud platform. Hence, the researcher has used CloudSim to model datacenters, hosts, and VMs for experimenting in a simulated cloud environment. CloudSim supports VM provisioning at two levels:
(i) At the host level: it is possible to specify how much of the overall processing power of each core is assigned to each VM, known as the VM Allocation policy.
(ii) At the VM level: the VM assigns a fixed amount of its available processing power to the individual application services (task units) hosted within its execution engine, known as VM Scheduling.
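The two provisioning levels can be sketched in Python. This is an illustrative model only; the class and method names below are hypothetical and do not reflect the actual CloudSim API.

```python
class Vm:
    """VM-level scheduling: the VM divides its fixed MIPS among its tasks."""
    def __init__(self, mips):
        self.mips = mips
        self.tasks = []          # task lengths, in millions of instructions

    def submit(self, task_length):
        self.tasks.append(task_length)

    def time_shared_finish(self):
        # Simplified time-shared model: all tasks run concurrently on an
        # equal share of the VM's MIPS, so the longest task dominates.
        n = len(self.tasks)
        return max(self.tasks) * n / self.mips if n else 0.0


class Host:
    """Host-level provisioning: split the host's total MIPS among VMs
    (the VM Allocation policy)."""
    def __init__(self, total_mips):
        self.total_mips = total_mips
        self.vms = []

    def allocate_vm(self, requested_mips):
        used = sum(vm.mips for vm in self.vms)
        if used + requested_mips > self.total_mips:
            return None          # not enough host capacity left
        vm = Vm(requested_mips)
        self.vms.append(vm)
        return vm
```

Host-level allocation rejects VMs that would oversubscribe the host, while each VM independently schedules the tasks submitted to it.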
In this paper, the ACO algorithm is used to allocate incoming batch jobs to VMs at the VM level (VM Scheduling). The VMs in a datacenter do not necessarily all have the same processing power; it can vary across computing nodes. Tasks/requests (application services) are then allocated to the most powerful VM first, then to the next most powerful, and so on. In this way the overall makespan is reduced, the resource utilization ratio increases, and cost decreases.
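As a baseline for this power-ordered allocation idea, a simple greedy heuristic can be sketched (this is a common earliest-finish-time heuristic, not the paper's ACO method; all names are illustrative):

```python
def assign_greedy(task_lengths, vm_mips):
    """Assign each task (longest first) to the VM that would finish it
    earliest given the current load, so powerful VMs fill up first.

    task_lengths: task sizes in millions of instructions (MI)
    vm_mips:      VM speeds in MIPS
    Returns (assignment dict task->vm, makespan).
    """
    finish = [0.0] * len(vm_mips)          # current finish time per VM
    assignment = {}
    for tid in sorted(range(len(task_lengths)),
                      key=lambda t: -task_lengths[t]):
        # pick the VM minimizing this task's completion time
        j = min(range(len(vm_mips)),
                key=lambda v: finish[v] + task_lengths[tid] / vm_mips[v])
        finish[j] += task_lengths[tid] / vm_mips[j]
        assignment[tid] = j
    return assignment, max(finish)
```

Such a greedy schedule gives a reference makespan against which an ACO-based scheduler can be compared.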
4. Cloud Scheduling Based ACO
The basic idea of ACO is to simulate the foraging behavior of ant colonies. When a group of ants searches for food, they communicate with each other through a chemical called pheromone. Initially, ants search for food randomly. Once an ant finds a path to a food source, it deposits pheromone on the path, and other ants can follow its trail to the food source by sensing the pheromone on the ground. As this process continues, most of the ants are attracted to the shortest path, because the largest amount of pheromone accumulates on it. The advantages of the algorithm are its positive feedback mechanism, inherent parallelism, and extensibility. The disadvantages are its overhead and the stagnation phenomenon: after searching for a certain time, all individuals may converge on exactly the same solution, so the algorithm cannot explore the solution space further and converges to a local optimum. An ACO algorithm can be applied to any combinatorial problem for which it is possible to define:
(i) A problem representation which allows ants to incrementally build/modify solutions.
(ii) The heuristic desirability η of edges.
(iii) A constraint satisfaction method which forces the construction of feasible solutions.
(iv) A pheromone updating rule which specifies how to modify the pheromone trail τ on the edges of the graph.
(v) A probabilistic transition rule based on the heuristic desirability and the pheromone trail.
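The positive-feedback mechanism described above can be illustrated with a toy two-path model. This is a deterministic mean-field sketch (not the full algorithm, and the parameter values are arbitrary): traffic splits in proportion to pheromone, and each path's deposit is inversely proportional to its length, so the shorter path pulls ahead.

```python
def two_path_pheromone(iterations=200, evaporation=0.1, deposit=1.0):
    """Mean-field pheromone dynamics on two alternative paths to food.

    Each step, ant traffic splits in proportion to current pheromone;
    every path then evaporates, and receives a deposit proportional to
    its traffic share and inversely proportional to its length.
    """
    lengths = {"short": 1.0, "long": 2.0}
    pher = {p: 1.0 for p in lengths}       # equal pheromone at the start
    for _ in range(iterations):
        total = sum(pher.values())
        share = {p: pher[p] / total for p in pher}
        for p in pher:
            pher[p] = (1.0 - evaporation) * pher[p] \
                      + share[p] * deposit / lengths[p]
    return pher
```

Starting from equal pheromone, the short path receives larger deposits, attracts a larger traffic share, and therefore accumulates pheromone faster, which is exactly the positive-feedback loop (and, in the limit, the stagnation risk) described above.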
In this section, a cloud task scheduling algorithm based on ACO is proposed. Decreasing the makespan of the tasks is the basic idea of the proposed method.
1. Problem Representation: The problem is represented as a graph G = (N, E), where the set of nodes N represents the VMs and tasks, and the set of edges E represents the connections between tasks and VMs, as shown in Figure 1. All ants are initially placed on randomly chosen VMs. During an iteration, ants build solutions to the cloud scheduling problem by moving from one VM to another for the next task until they complete a tour (all tasks have been allocated). Iterations are indexed by t, 1 ≤ t ≤ tmax, where tmax is the maximum number of iterations allowed.
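A condensed sketch of such an ACO scheduler follows. The parameter names (α, β, ρ, Q), the MIPS-based heuristic desirability, and the makespan-based deposit rule are illustrative choices under common ACO conventions, not necessarily the paper's exact settings:

```python
import random

def aco_schedule(task_lengths, vm_mips, n_ants=10, n_iters=50,
                 alpha=1.0, beta=2.0, rho=0.1, q=100.0, seed=42):
    """ACO task scheduling sketch: each ant builds a complete tour
    (one VM per task); the tour's makespan drives pheromone deposits."""
    random.seed(seed)
    n_tasks, n_vms = len(task_lengths), len(vm_mips)
    # pheromone tau[i][j] on edge (task i, VM j); heuristic eta favors fast VMs
    tau = [[1.0] * n_vms for _ in range(n_tasks)]
    eta = [[vm_mips[j] / task_lengths[i] for j in range(n_vms)]
           for i in range(n_tasks)]
    best, best_makespan = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            # probabilistic transition rule: weight ~ tau^alpha * eta^beta
            assign = []
            for i in range(n_tasks):
                w = [(tau[i][j] ** alpha) * (eta[i][j] ** beta)
                     for j in range(n_vms)]
                assign.append(random.choices(range(n_vms), weights=w)[0])
            # makespan = finish time of the most loaded VM
            load = [0.0] * n_vms
            for i, j in enumerate(assign):
                load[j] += task_lengths[i] / vm_mips[j]
            makespan = max(load)
            if makespan < best_makespan:
                best, best_makespan = assign, makespan
            # pheromone update: evaporate everywhere, then deposit on the
            # edges this ant used, more for shorter makespans
            for i in range(n_tasks):
                for j in range(n_vms):
                    tau[i][j] *= (1.0 - rho)
                tau[i][assign[i]] += q / makespan
    return best, best_makespan
```

The five components listed above map directly onto this sketch: the task-VM graph (tau/eta tables), the heuristic η, feasibility by construction (every task gets exactly one VM), the evaporation-plus-deposit update of τ, and the probabilistic transition rule.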
CONCLUSION
Cloud computing is the most network-centric compute paradigm to date, and a successful transition to the cloud depends on a rock-solid network foundation that enables organizations to move to the cloud at their own pace. We have presented a cloud computing network and storage scheduling system based on the ACO algorithm, designed to achieve efficient resource utilization and load balancing across servers. First, the best parameter values for the ACO algorithm were determined experimentally. Then, the ACO algorithm was evaluated in applications with the number of tasks varying from 100 to 1000. Simulation results demonstrate that the ACO algorithm outperforms the FCFS and RR algorithms. In future work, the effect of precedence between tasks and load balancing will be considered, along with improving system performance by reducing the number of servers in the network.