21-01-2013, 10:16 AM
EFFICIENT AUDIT SERVICE OUTSOURCING FOR DATA INTEGRITY IN CLOUDS
Abstract
Cloud-based outsourced storage relieves the client of the burden of storage management and maintenance by providing a comparably low-cost, scalable, location-independent platform. However, the fact that clients no longer have physical possession of their data means they face a potentially formidable risk of missing or corrupted data. To mitigate these security risks, audit services are critical to ensure the integrity and availability of outsourced data and to support digital forensics and credibility in cloud computing. Provable data possession (PDP), a cryptographic technique for verifying the integrity of data without retrieving it from an untrusted server, can be used to realize audit services. In this paper, drawing on the interactive zero-knowledge proof system, we address the construction of an interactive PDP protocol to prevent fraudulence by the prover (soundness property) and leakage of verified data (zero-knowledge property). We prove that our construction holds these properties based on the computational Diffie–Hellman assumption and a rewindable black-box knowledge extractor. We also propose an efficient mechanism based on probabilistic queries and periodic verification to reduce the audit cost per verification and to detect abnormal behavior in a timely manner. In addition, we present an efficient method for selecting an optimal parameter value to minimize the computational overhead of cloud audit services. Our experimental results demonstrate the effectiveness of our approach.
Introduction
In recent years, the emerging cloud-computing paradigm has been rapidly gaining momentum as an alternative to traditional information technology. Cloud computing provides a scalable environment for growing amounts of data and for processes that serve various applications and services by means of on-demand self-service. One fundamental aspect of this paradigm shift is that data are being centralized and outsourced into clouds. Outsourced storage services in clouds have become a new profit growth point by providing a comparably low-cost, scalable, location-independent platform for managing clients' data. The cloud storage service (CSS) relieves the burden of storage management and maintenance. However, if such an important service is vulnerable to attacks or failures, it can bring irretrievable losses to users, since their data or archives are stored in an uncertain storage pool outside their enterprises. These security risks arise for several reasons: first, although cloud infrastructures are much more powerful and reliable than personal computing devices, they are still susceptible to security threats from both outside and inside the cloud; second, for their own benefit, there exist various motivations for cloud service providers (CSPs) to behave unfaithfully toward cloud users; furthermore, disputes occasionally arise from a lack of trust in the CSP, whose behavior may not be known to the cloud users even when a dispute results from the users' own improper operations. Therefore, it is necessary for cloud service providers to offer an efficient audit service to check the integrity and availability of the stored data.
Arithmetic Operators for Pairing-Based Cryptography
Since their introduction in constructive cryptographic applications, pairings over (hyper)elliptic curves have been at the heart of an ever increasing number of protocols. Software implementations being rather slow, the study of hardware architectures has become an active research area. In this paper, we first study an accelerator for the η_T pairing over F_3[x]/(x^97 + x^12 + 2). Our architecture is based on a unified arithmetic operator which performs addition, multiplication, and cubing over F_3^97. This design methodology allows us to design a compact coprocessor (1888 slices on a Virtex-II Pro 4 FPGA) which compares favorably with other solutions described in the open literature. We then describe ways to extend our approach to any characteristic and any extension field.
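The three operations the unified operator performs can be modeled in software. The following sketch (not the paper's hardware design) implements addition, multiplication, and cubing over F_3^97 = F_3[x]/(x^97 + x^12 + 2), representing elements as lists of 97 coefficients mod 3; cubing uses the Frobenius identity (Σ a_i x^i)^3 = Σ a_i x^(3i), which holds in characteristic 3 and is why cubing gets its own cheap datapath in hardware:

```python
# Software model of arithmetic over GF(3^97) = F_3[x]/(x^97 + x^12 + 2).
# Elements are lists of 97 "trits" (coefficients in {0, 1, 2}).

M_DEG = 97  # extension degree; reduction: x^97 ≡ 2x^12 + 1 (mod 3)

def add(a, b):
    """Coefficient-wise addition mod 3."""
    return [(x + y) % 3 for x, y in zip(a, b)]

def reduce_poly(c):
    """Reduce a polynomial (trit list of any length) mod x^97 + x^12 + 2."""
    c = c[:]
    for i in range(len(c) - 1, M_DEG - 1, -1):
        if c[i]:
            # x^i = x^(i-97) * x^97 ≡ x^(i-97) * (2x^12 + 1)
            c[i - M_DEG + 12] = (c[i - M_DEG + 12] + 2 * c[i]) % 3
            c[i - M_DEG] = (c[i - M_DEG] + c[i]) % 3
            c[i] = 0
    return c[:M_DEG]

def mul(a, b):
    """Schoolbook polynomial multiplication followed by reduction."""
    c = [0] * (2 * M_DEG - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[i + j] = (c[i + j] + ai * bj) % 3
    return reduce_poly(c)

def cube(a):
    """Frobenius cubing: in characteristic 3, (sum a_i x^i)^3 = sum a_i x^(3i),
    so cubing is a coefficient permutation plus a reduction -- far cheaper
    than a general multiplication."""
    c = [0] * (3 * (M_DEG - 1) + 1)
    for i, ai in enumerate(a):
        c[3 * i] = ai
    return reduce_poly(c)
```

A quick sanity check of the Frobenius shortcut is that `cube(a)` agrees with `mul(mul(a, a), a)` for any element; this linearity of cubing is exactly what makes a small dedicated cubing unit worthwhile in the FPGA design.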
Efficiency and Security
The proposed PDP scheme relies only on efficient symmetric-key operations in both the setup (performed once) and verification phases. Moreover, our scheme is more efficient than POR, as it requires no bulk encryption of outsourced data and no data expansion due to additional sentinel blocks. Our scheme provides probabilistic assurance that the untampered data are being stored on the server, with the exact probability fixed at setup. Furthermore, our scheme is provably secure in the random oracle model (ROM).
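The probabilistic assurance mentioned above is the standard spot-checking argument: if t of n blocks are corrupted and the verifier challenges c blocks chosen uniformly without replacement, the server escapes detection only if all c challenged blocks are clean. A minimal sketch of that calculation (the exact parameter names are ours, not the paper's):

```python
# Detection probability for probabilistic spot-checking:
# P = 1 - C(n - t, c) / C(n, c), the chance that a random c-block
# challenge hits at least one of the t corrupted blocks.
from math import comb

def detection_probability(n, t, c):
    """n = total blocks, t = corrupted blocks, c = challenged blocks."""
    if c > n - t:
        # More challenged blocks than clean ones: detection is certain.
        return 1.0
    return 1 - comb(n - t, c) / comb(n, c)
```

For instance, with n = 10,000 blocks of which 1% are corrupted, challenging about 460 blocks already yields a detection probability above 99%, which is why the challenge size c can be fixed at setup far below n.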
Dynamic Data Support
Unlike both prior results, the new scheme supports secure and efficient dynamic operations on outsourced data blocks, including modification, deletion, and append. Supporting such operations is an important step toward practicality, since many envisaged applications are not limited to data warehousing, i.e., dynamic operations need to be provided.
Batching Updates and Deletions
Although the above demonstrates that the proposed technique can accommodate certain operations on outsourced data, it is clear that the cost of updating all remaining verification tokens for a single block update or deletion is not negligible for the data owner (OWN). Fortunately, it is both easy and natural to batch multiple operations. Specifically, any number of block updates and deletes can be performed at the cost of a single update or delete. To do this, we need to modify the for-loop in Algorithm 3 to take care of both deletions and updates at the same time. Since the modifications are straightforward, we do not illustrate them in a separate algorithm.
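Since Algorithm 3 and the token format are not reproduced here, the following is only an illustrative sketch under assumed structure: tokens are modeled as HMACs over pseudorandomly selected blocks, and the point shown is that a whole batch of updates and deletes can be absorbed in a single pass over the blocks before one token-refresh loop, instead of one refresh loop per operation:

```python
# Illustrative sketch (assumed token structure, not the paper's Algorithm 3):
# a batch of updates {index: new_data} and deletes {index} is applied in one
# pass, after which each remaining verification token is recomputed once.
import hashlib
import hmac
import random

def make_token(key, k, blocks, sample=2):
    """Hypothetical token k: an HMAC over a pseudorandom subset of blocks."""
    rng = random.Random(k)  # stand-in for a keyed PRF selecting block indices
    idxs = sorted(rng.sample(range(len(blocks)), min(sample, len(blocks))))
    mac = hmac.new(key, str(k).encode(), hashlib.sha256)
    for i in idxs:
        mac.update(blocks[i])
    return mac.digest()

def batch_refresh_tokens(key, blocks, updates, deletes, n_tokens):
    """Apply all updates and deletes together, then refresh every remaining
    token exactly once -- the cost of a single update/delete operation."""
    new_blocks = [updates.get(i, b) for i, b in enumerate(blocks)
                  if i not in deletes]
    tokens = [make_token(key, k, new_blocks) for k in range(n_tokens)]
    return new_blocks, tokens
```

The design point is only the loop structure: merging the delete set and update map before the single token-refresh pass is what makes the batched cost independent of how many operations are in the batch.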
Multiple Files
We have described PDP schemes for the case when a client stores a single file F on the server. In the TagBlock algorithm, for each block mi the client computes a tag over the tuple (Wi,mi). We emphasize that the values Wi cannot be reused. Since Wi is obtained by concatenating a secret value v with i, this implies that the indices i must be different across all tags. In other words, the client should not use the same index twice for computing tags. This condition holds because in our scheme an index i is simply the position of the block mi in the file. In order to store multiple files on the server, the client must ensure that indices used to compute tags are distinct not only across the tags corresponding to the blocks of each file, but also across the tags corresponding to the blocks of all files. One simple method to achieve this is to prepend the file's identifier to the index. For example, if the identifier of a file F = (m1, . . . ,mn) is given by id(F), then for each block mi, 1 ≤ i ≤ n, C computes the tag (Tid(F)|i,mi , Wid(F)|i) ← TagBlock(pk, sk,mi, id(F)|i). The uniqueness of indices is ensured under the assumption that each file has a unique identifier. Another simple way to ensure that indices are only used once is to use a global counter for the index, which is incremented by the client each time after a tag is computed.
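Both index-disambiguation strategies can be sketched concisely. The real TagBlock is a homomorphic construction; the HMAC below is only a hypothetical stand-in to show how index uniqueness is arranged, with strategy 1 deriving the index as id(F)|i and strategy 2 using a client-side global counter:

```python
# Sketch of the two index-uniqueness strategies described above.
# HMAC-SHA256 stands in for the scheme's actual TagBlock computation.
import hashlib
import hmac

def tag_block(sk, block, index):
    """Hypothetical TagBlock stand-in: the tag binds block to its index."""
    return hmac.new(sk, index + b"|" + block, hashlib.sha256).digest()

def tag_file_with_id(sk, file_id, blocks):
    """Strategy 1: prepend the file identifier, so index = id(F)|i.
    Indices are unique as long as file identifiers are unique."""
    return [tag_block(sk, m, file_id + b"|" + str(i + 1).encode())
            for i, m in enumerate(blocks)]

class CounterTagger:
    """Strategy 2: a global counter, incremented after every tag,
    guarantees no index is ever reused across any file."""
    def __init__(self, sk):
        self.sk = sk
        self.counter = 0

    def tag(self, block):
        t = tag_block(self.sk, block, str(self.counter).encode())
        self.counter += 1
        return t
```

Either way, identical block contents in different files (or at different positions) produce distinct tags, which is exactly the reuse condition the scheme requires.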
Conclusions
In this paper, we addressed the construction of an efficient audit service for data integrity in clouds. Drawing on the standard interactive proof system, we proposed an interactive audit protocol to implement the audit service based on a third-party auditor. In this audit service, the third-party auditor, acting as an agent of the data owners, can issue periodic verifications to monitor changes to the outsourced data according to an optimized schedule. To realize the audit model, we only need to maintain the security of the third-party auditor and deploy a lightweight daemon to execute the verification protocol. Hence, our technology can be easily adopted in a cloud computing environment to replace the traditional hash-based solution. More importantly, we proposed and quantified a new audit approach based on probabilistic queries and periodic verification, as well as an optimization method for the parameters of cloud audit services. This approach greatly reduces the workload on the storage servers while still detecting server misbehavior with high probability. Our experiments clearly showed that our approach minimizes computation and communication overheads.