04-10-2012, 12:44 PM
Going Back and Forth: Efficient Multi-deployment and Multi-snapshotting on Clouds
Going Back and Forth.doc (Size: 359.5 KB / Downloads: 36)
ABSTRACT
This project addresses security, which has become one of the major issues for data communication over wired and wireless networks. Unlike past work on the design of cryptography algorithms and system infrastructures, we propose a dynamic routing algorithm that randomizes delivery paths for data transmission. The algorithm is easy to implement and compatible with popular routing protocols, such as the Routing Information Protocol (RIP) in wired networks and the Destination-Sequenced Distance-Vector (DSDV) protocol in wireless networks, without introducing extra control messages. An analytic study of the proposed algorithm is presented, and a series of simulation experiments is conducted to verify the analytic results and to demonstrate the capability of the algorithm. Over the past decades, various security-enhanced measures have been proposed to improve the security of data transmission over public networks; existing work includes the design of cryptography algorithms, system infrastructures, and security-enhanced routing methods. The main objective of the project is to propose a dynamic routing algorithm that improves the security of data transmission.
INTRODUCTION:
Infrastructure as a Service (IaaS) cloud computing has emerged as a viable alternative to the acquisition and management of physical resources. With IaaS, users can lease storage and computation time from large datacenters. Computation time is leased by allowing users to deploy virtual machines (VMs) on the datacenter’s resources. Since the user has complete control over the configuration of the VMs through on-demand deployment, IaaS leasing is equivalent to purchasing dedicated hardware, but without the long-term commitment and cost. The on-demand nature of IaaS is critical to making such leases attractive, since it enables users to expand or shrink their resources according to their computational needs, using external resources to complement their local resource base.
The deployment problem is particularly acute for VM images used in scientific computing, where image sizes are large and a typical deployment consists of hundreds or even thousands of such images. Conventional deployment techniques broadcast the images to the nodes before starting the VM instances, a process that can take from tens of minutes to hours, not counting the time to boot the operating system itself.
EXISTING SYSTEM
The huge computational potential offered by large distributed systems is hindered by poor data sharing scalability.
We addressed several major requirements related to these challenges. One such requirement is the need to cope efficiently with massive unstructured data (organized as huge sequences of bytes, or BLOBs, that can grow to terabytes) in very large-scale distributed systems, while maintaining very high data throughput for highly concurrent, fine-grain data accesses.
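The fine-grain concurrency requirement can be illustrated with a short Java sketch (an illustration only, not the BlobSeer API): the BLOB is striped into fixed-size chunks, each guarded by its own lock, so concurrent accesses that touch different chunks never contend with one another.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of fine-grain access to a striped BLOB: one lock per chunk,
// so writes to different chunks proceed fully in parallel.
public class ChunkedBlob {
    static final int CHUNK = 4;        // chunk size in bytes (tiny for the demo)
    final byte[][] chunks;
    final Object[] locks;

    ChunkedBlob(int size) {
        int n = (size + CHUNK - 1) / CHUNK;
        chunks = new byte[n][CHUNK];
        locks = new Object[n];
        for (int i = 0; i < n; i++) locks[i] = new Object();
    }

    // Write one byte; only the owning chunk is locked, not the whole BLOB.
    void write(int offset, byte b) {
        int c = offset / CHUNK;
        synchronized (locks[c]) { chunks[c][offset % CHUNK] = b; }
    }

    byte read(int offset) {
        int c = offset / CHUNK;
        synchronized (locks[c]) { return chunks[c][offset % CHUNK]; }
    }

    public static void main(String[] args) throws Exception {
        ChunkedBlob blob = new ChunkedBlob(64);
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<?>> pending = new ArrayList<>();
        for (int i = 0; i < 64; i++) {          // 64 concurrent single-byte writes
            final int off = i;
            pending.add(pool.submit(() -> blob.write(off, (byte) off)));
        }
        for (Future<?> f : pending) f.get();    // wait for all writers
        pool.shutdown();
        System.out.println(blob.read(63));      // prints 63
    }
}
```

A production service would of course stripe chunks across storage nodes rather than arrays in one JVM; the point here is only that per-chunk synchronization keeps fine-grain concurrent accesses from serializing.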
The role of virtualization in clouds is also emphasized by identifying it as a key component; indeed, clouds have been defined simply as virtualized hardware and software plus the existing monitoring and provisioning technologies.
Cloud computing is a buzzword that covers a wide variety of aspects, such as deployment, load balancing, provisioning, and data and processing outsourcing.
PROPOSED SYSTEM
We propose a distributed virtual file system specifically optimized for both the multi-deployment and multi-snapshotting patterns. Since the patterns are complementary, we investigate them in conjunction. Our proposal offers a good balance between performance, storage space, and network traffic consumption, while handling snapshotting transparently and exposing standalone, raw image files (understood by most hypervisors) to the outside.
We introduce a series of design principles that optimize the multi-deployment and multi-snapshotting patterns, and describe how our design can be integrated with IaaS infrastructures.
We show how to realize these design principles by building a virtual file system that leverages versioning-based distributed storage services. To illustrate this point, we describe an implementation on top of BlobSeer, a versioning storage service specifically designed for high throughput under concurrency.
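The versioning idea behind cheap snapshotting can be sketched as follows (a simplified model of our own, not BlobSeer's actual interface): each snapshot is a new chunk table that shares every unmodified chunk with its parent, so taking a snapshot costs only the data that actually changed.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Copy-on-write versioning sketch: a "snapshot" copies only the chunk table,
// not the chunk payloads; unmodified chunks remain shared with the parent.
public class VersionedImage {
    // version id -> (chunk index -> chunk payload)
    static final List<Map<Integer, String>> versions = new ArrayList<>();

    // Write one chunk on top of an existing snapshot, producing a new snapshot.
    static int snapshotWrite(int parent, int chunk, String data) {
        Map<Integer, String> table = new HashMap<>(versions.get(parent)); // shares payloads
        table.put(chunk, data);
        versions.add(table);
        return versions.size() - 1;             // id of the new snapshot
    }

    static String read(int version, int chunk) {
        return versions.get(version).get(chunk);
    }

    public static void main(String[] args) {
        versions.add(new HashMap<>(Map.of(0, "base-0", 1, "base-1"))); // version 0: base image
        int v1 = snapshotWrite(0, 1, "patched-1"); // a VM modifies chunk 1, then snapshots
        System.out.println(read(0, 1));  // base-1   (old snapshot untouched)
        System.out.println(read(v1, 0)); // base-0   (shared with the base, not copied)
        System.out.println(read(v1, 1)); // patched-1
    }
}
```

The same structure also makes rollback trivial: deploying from any earlier version is just reading through its chunk table.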
JAVA VIRTUAL MACHINE (JVM)
Beyond the language, there is the Java Virtual Machine (JVM), an essential element of Java technology. The virtual machine can be embedded within a web browser or an operating system. Once a piece of Java code is loaded onto a machine, it is verified: as part of the loading process, a class loader is invoked, and bytecode verification makes sure that the code generated by the compiler will not corrupt the machine it is loaded on. Bytecode verification takes place when a class is loaded, before it is executed, to make sure the bytecode is well formed and safe.
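The loading step described above can be observed directly from Java code. The small demo below asks the class loader for a class by name at run time and inspects which loader supplied it; core platform classes come from the bootstrap loader, which the API reports as null.

```java
// Demo of run-time class loading: Class.forName triggers loading and
// verification of the named class (if not already loaded) before returning it.
public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // Ask the JVM to locate, load, and verify a class by name.
        Class<?> cls = Class.forName("java.util.ArrayList");
        System.out.println(cls.getName());                // java.util.ArrayList
        // Bootstrap-loaded core classes report a null class loader.
        System.out.println(cls.getClassLoader() == null); // true
    }
}
```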
Importance of Testing
Testing is difficult. It requires knowledge of the application and the system architecture, and the majority of the preparation work is tedious: the test conditions, test data, and expected results are generally created manually. System testing is also one of the final activities before the system is released to production, so there is always pressure to complete it promptly to meet the deadline. Nevertheless, systems testing is important.
In a mainframe environment, when the system is distributed to multiple sites, any errors or omissions in the system will affect several groups of users, and any savings realized in downsizing the application can be negated by the costs of correcting software errors and reprocessing information.
Systems Testing
The third level of testing is systems testing. Systems testing verifies that the system performs the business functions while meeting the specified performance requirements. It is performed by a team consisting of software technicians and users, and it uses the Systems Requirements document, the System Architectural Design and Detailed Design documents, and the Information Systems Department standards as its sources. Documentation is recorded and saved for systems testing.
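As a toy illustration of checking a business function against manually prepared expected results, consider the following Java sketch (the function computeLeaseCost and its pricing rule are hypothetical, invented for this example, and not part of the project):

```java
// Illustrative system-level check: compare a business function's actual
// output against an expected result prepared in advance by the test team.
public class SystemTestDemo {
    // Hypothetical business function: cost of leasing VM time at a flat rate.
    static double computeLeaseCost(int vmHours, double ratePerHour) {
        return vmHours * ratePerHour;
    }

    public static void main(String[] args) {
        double expected = 24.0;                      // prepared manually
        double actual = computeLeaseCost(12, 2.0);   // exercise the function
        System.out.println(actual == expected ? "PASS" : "FAIL"); // prints PASS
    }
}
```

Real systems testing would drive the deployed system end to end rather than a single method, but the pattern is the same: known inputs, predefined expected results, and a recorded pass/fail outcome.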