Implementation of a Green Power Management Algorithm for Virtual Machines on Cloud Computing
Abstract. With the growth of electronic government and business services, the demand for servers increases every year: a considerable number of new servers are procured, while older servers become too outdated to provide good service. Because server performance does not grow nearly as fast as demand, expanding the server pool requires more space, power, air conditioning, network capacity, staff, and other infrastructure; over several years these operating costs often exceed the purchase price of the servers themselves. Providing these services is also quite energy-intensive, especially when servers run at low utilization, which leaves resources idle and wasted and is the main cause of the low energy efficiency of data centers. Even at a very low processor load, such as 10% CPU utilization, total power consumption exceeds 50% of the peak. Similarly, if the disk drives, the network, or any other such resource is the bottleneck, the waste of the remaining resources increases. "Green" has recently become a key term and a hot topic; we target this topic and apply virtualization technology to develop power management for computing resources.
Introduction
"Cloud" actually refers to the network: the name comes from engineers' schematic drawings, in which a network is often represented by a cloud. Cloud services are therefore services delivered over the network.
Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine. A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS). In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process. An essential characteristic of a virtual machine is that the software running inside is limited to the resources and abstractions provided by the virtual machine—it cannot break out of its virtual world.
Cloud computing and virtualization not only accelerate data center construction but also bring the possibility of green energy savings. When data center applications run on generic virtual machines, workloads can be consolidated onto a smaller number of physical machines, which helps use resources more efficiently. If workloads could be allocated to different resources depending on time and space, energy efficiency would improve and resource waste would be avoided.
In this paper, we propose the Green Free Power Management (GFPM) load-balancing approach, which includes three main phases: (1) the virtualization mechanism, (2) the dynamic resource allocation approach, and (3) the green free power management scheme. The rest of this paper is organized as follows. Section 2 introduces the background and related works; Section 3 describes the overall system and its design; Section 4 details the experiments and results; and finally, Section 5 outlines future work and the main conclusions.
2 Background and Related Works
2.1 Energy Consolidation Awareness
Energy consumption in hosting Internet services is becoming a pressing issue as these services scale up. Dynamic server provisioning techniques are effective in turning off unnecessary servers to save energy. Such techniques, mostly studied for request-response services, face challenges in the context of connection servers that host a large number of long-lived TCP connections, yet prior work shows they can still save a significant amount of energy without sacrificing the user experience. Consolidation of applications in cloud computing environments presents a significant opportunity for energy optimization. The goal of energy-aware consolidation is to keep servers well utilized so that idle power costs are efficiently amortized, but without incurring an energy penalty due to internal contention.
2.2 Virtualization
Virtualization is simply the logical separation of the request for some service from the physical resources that actually provide that service. Virtualization, focusing on logical operating environments rather than physical ones, makes applications, services, and instances of an operating system portable across different physical computer systems. Virtualization can execute applications under many operating systems and makes IT easier to manage across multiple OSes.
In general, most virtualization strategies fall into one of two major categories:
Full virtualization (also called native virtualization) is similar to emulation. As in emulation, unmodified operating systems and applications run inside a virtual machine. Full virtualization differs from emulation in that the operating systems and applications are designed to run on the same architecture as the underlying physical machine. This allows a full virtualization system to run many instructions directly on the raw hardware.
3 Open Nebula
Open Nebula is a virtual infrastructure engine that enables the dynamic deployment and re-allocation of virtual machines in a pool of physical resources. The Open Nebula system extends the benefits of virtualization platforms from a single physical resource to a pool of resources, decoupling the server not only from the physical infrastructure but also from the physical location [4]. Open Nebula consists of one front-end and multiple back-ends. The front-end provides users with access interfaces and management functions. The back-ends are installed on Xen servers, where Xen hypervisors run and virtual machines can be booted. Communication between the front-end and back-ends is over SSH. Open Nebula gives users a single access point to deploy virtual machines on a locally distributed infrastructure.
Open Nebula orchestrates storage, network, virtualization, monitoring, and security technologies to enable the dynamic placement of multi-tier services (groups of interconnected virtual machines) on distributed infrastructures, combining both data center resources and remote cloud resources, according to allocation policies [4]. The architecture of Open Nebula can be described as Figure 3.
Live migration is the movement of a virtual machine from one physical host to another while continuously powered-up. When properly carried out, this process takes place without any noticeable effect from the point of view of the end user. Live migration allows an administrator to take a virtual machine offline for maintenance or upgrading without subjecting the system's users to downtime. When resources are virtualized, additional management of VMs is needed to create, terminate, clone or move VMs from host to host. Migration of VMs can be done off-line (the guest in the VM is powered off) or on-line (live migration of a running VM to another host).
One of the most significant advantages of live migration is the fact that it facilitates proactive maintenance. If an imminent failure is suspected, the potential problem can be resolved before disruption of service occurs. Live migration can also be used for load balancing, in which work is shared among computers in order to optimize the utilization of available CPU resources.
However, Open Nebula lacks a GUI management tool. In previous work, we built virtual machines on Open Nebula and implemented a Web-based management tool, so the system administrator can easily monitor and manage the entire Open Nebula system in our project. Open Nebula is composed of three main components: (1) the Open Nebula Core, a centralized component that manages the life cycle of a VM by performing basic VM operations and also provides a basic management and monitoring interface for the physical hosts; (2) the Capacity Manager, which governs the functionality provided by the Open Nebula Core and adjusts the placement of VMs based on a set of pre-defined policies; and (3) the Virtualizer Access Drivers: in order to provide an abstraction of the underlying virtualization layer, Open Nebula uses pluggable drivers that expose the basic functionality of the hypervisor [5].
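The pluggable-driver idea in component (3) can be sketched in a few lines. This is our own illustration, not Open Nebula's actual driver API: the class and method names below are hypothetical, and the Xen `xm` commands are shown only as logged strings.

```python
# Sketch of a pluggable virtualizer-driver abstraction: the core talks only
# to a small fixed interface, and each hypervisor supplies its own driver.
# Names are illustrative assumptions, not the real Open Nebula driver API.
from abc import ABC, abstractmethod

class VirtualizerDriver(ABC):
    """Minimal hypervisor-facing interface exposed to the core."""
    @abstractmethod
    def deploy(self, vm_id: int, host: str) -> None: ...
    @abstractmethod
    def shutdown(self, vm_id: int) -> None: ...
    @abstractmethod
    def migrate(self, vm_id: int, src: str, dst: str, live: bool = True) -> None: ...

class XenDriver(VirtualizerDriver):
    """A Xen-flavored driver that records the commands it would issue."""
    def __init__(self):
        self.log = []
    def deploy(self, vm_id, host):
        self.log.append(f"xm create vm{vm_id} on {host}")
    def shutdown(self, vm_id):
        self.log.append(f"xm shutdown vm{vm_id}")
    def migrate(self, vm_id, src, dst, live=True):
        flag = "--live " if live else ""
        self.log.append(f"xm migrate {flag}vm{vm_id} {src} -> {dst}")

# The core stays hypervisor-agnostic: it only calls the interface.
driver = VirtualizerDriver.register(XenDriver) and None or XenDriver()
driver.deploy(7, "backend-01")
driver.migrate(7, "backend-01", "backend-02")
```

Swapping in a KVM driver would then require no change to the core, which is the point of the abstraction.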
3.1 Dynamic Resource Allocation and Green Free Power Management
The Dynamic Resource Allocation (DRA) algorithm [1] focuses on enhancing the Hadoop HA architecture. The purpose of DRA is to reach the best balance among the physical machines: to avoid concentrating computing resources on a few specific physical machines, balancing the resources is the most important issue, and maximum efficiency is achieved when resources are evenly distributed. DRA manages the allocation of resources to a set of virtual machines running on a cluster of hosts with the goal of fair and effective use of resources. It makes virtual machine placement and migration recommendations that enforce resource-based service-level agreements and user-specified constraints, and it maintains load balance across the cluster even as workloads change.

GFPM saves power by dynamically right-sizing cluster capacity according to workload demands. It recommends evacuating and powering off hosts when CPU is lightly utilized, and it recommends powering hosts back on when either CPU utilization increases appropriately or additional host resources are needed to meet user-specified constraints. GFPM executes DRA in a what-if mode to ensure its host power recommendations are consistent with the cluster constraints and objectives being managed by DRA. Hosts powered off by GFPM are marked as being in standby mode.
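The DRA rebalancing idea can be sketched as follows. This is a minimal, testable illustration of our own (not the paper's code), assuming each VM's load is a single number: compute each host's total load, then repeatedly migrate the smallest VM that narrows the gap from the busiest host to the idlest one.

```python
# Illustrative DRA-style load balancer: migrate VMs from the most loaded
# host to the least loaded one until no further move improves the balance.

def dra_rebalance(hosts):
    """hosts: dict host name -> list of VM loads (e.g. CPU %). Mutated in place.
    Returns the migrations performed as (vm_load, src, dst) tuples."""
    migrations = []
    while True:
        load = {h: sum(vms) for h, vms in hosts.items()}
        src = max(load, key=load.get)   # busiest host
        dst = min(load, key=load.get)   # idlest host
        # A move only helps if the destination stays below the source's old load.
        candidates = [vm for vm in hosts[src] if load[dst] + vm < load[src]]
        if src == dst or not candidates:
            return migrations
        vm = min(candidates)            # smallest VM that still improves balance
        hosts[src].remove(vm)
        hosts[dst].append(vm)
        migrations.append((vm, src, dst))
```

For example, with `{"h1": [50, 30], "h2": [10]}` a single migration of the 30%-load VM to h2 yields host loads of 50 and 40. Each move strictly reduces the sum of squared host loads, so the loop terminates.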
3.2 Green Power Mechanism Algorithm
First, DRA defines an ideal ratio, which equals the average loading: the sum of the virtual machine loadings divided by the number of booted hosts. Next, the loading of each virtual machine on the hosts is compared with this average. Finally, virtual machines with higher loading are migrated to hosts with lower loading. DRA can thus be regarded as a virtual machine load balancer. The GFPM algorithm achieves energy saving on top of this load balance. The GFPM algorithm is as follows:
Load_avg: the average loading ratio over the hosts available for allocation, computed as

    Load_avg = ( \sum_{j=1}^{n} Load(VM_j) ) / m ,

where n is the number of virtual machines and m is the number of booted hosts.

λ: maximum tolerance ratio of loading. β: minimum critical ratio of loading.

HOST_min: among the hosts available for allocation, the one whose CPU usage is the minimum, computed as

    HOST_min = argmin_{1 \le i \le m} \sum_{VM_j on HOST_i} Load(VM_j) .

Suppose there are n virtual machines. If Load_avg is greater than λ, the loading on the physical machines is too high; GFPM then wakes a new host and applies DRA to do load balancing. If Load_avg is smaller than β, the resources are idle most of the time, so one of the booted hosts should be turned off. The GFPM mechanism decides which host should be shut down; once the target host is determined, the virtual machines on it are migrated evenly to the other hosts, and the target host is shut down to achieve the purpose of energy saving.
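One GFPM decision cycle can be sketched as below. This is our own hedged illustration: the threshold values and the single-number per-host load model are assumptions, and DRA itself is only referenced, not re-implemented here.

```python
# Sketch of one GFPM decision cycle: wake a host when the average loading
# exceeds lambda, shut down the least-loaded host when it falls below beta.

LAMBDA = 0.8  # lambda: maximum tolerance ratio of loading (assumed value)
BETA = 0.2    # beta: minimum critical ratio of loading (assumed value)

def gfpm_step(booted, standby):
    """booted: dict host -> loading ratio in [0, 1]; standby: powered-off hosts.
    Returns the action GFPM would take this cycle as an (action, host) pair."""
    avg = sum(booted.values()) / len(booted)
    if avg > LAMBDA and standby:
        # Overloaded: power on a standby host, then let DRA rebalance onto it.
        return ("power_on", standby[0])
    if avg < BETA and len(booted) > 1:
        # Mostly idle: evacuate the least-loaded host (its VMs are migrated
        # evenly to the remaining hosts) and shut it down.
        return ("power_off", min(booted, key=booted.get))
    return ("no_op", None)
```

Running DRA in what-if mode before committing a `power_off` action, as the text describes, would simply mean checking that the surviving hosts can absorb the evacuated VMs without any of them exceeding λ.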
2.6 Related Works
In power management area, Z. Wu and J. Wang presented a control framework of tree distribution for power management in cloud computing so that power budget can be better managed based on workload or service types [27].
Additionally, R. S. Montero [7] proposes a performance model to characterize these variable capacity (elastic) cluster environments. The model can be used to dynamically dimension the cluster using cloud resources, according to a fixed budget, or to estimate the cost of completing a given workload in a target time.
This paper focuses on power management and allocation for physical machines hosting virtual machines. We present our green power management mechanism in the following section.
3 System Design
3.1 System Architecture
Besides managing individual VMs’ life cycle, we also designed the core to support services deployment; such services typically include a set of interrelated components (for example, a Web server and database back end) requiring several VMs. Thus, we can treat a group of related VMs as a first-class entity in Open Nebula. Besides managing the VMs as a unit, the core also handles the delivery of context information (such as the Web server’s IP address, digital certificates, and software licenses) to the VMs.
3.2 Interface Management
We designed a useful web interface so that end users can set up the virtualization environment quickly and easily. Figure 6 shows the authorization mechanism: through the core of the web-based management tool, users can control and manage the physical machines and the VM life cycle. The entire web-based management tool includes physical machine management, virtual machine management, and performance monitoring. As shown in Figure 7, users can set VM attributes such as memory size, IP address, root password, and VM name; the tool also includes a live migration function. Live migration means a VM can be moved to any working physical machine without suspending its in-service programs, and it is one of the advantages of Open Nebula. Therefore we can migrate any VM we want under any situation, and our DRA mechanism makes the migration function more meaningful.
RRDtool is the Open Source industry standard, high performance data logging and graphing system for time series data.