Managing Patients’ PHRs
Abstract
Managing the personal health record (PHR) is very important: mishandling of health records or unauthorized access to them may lead to severe health problems and, in some cases, even to the patient’s death.
Existing system for managing patients’ PHRs
Because of the high cost of maintaining data centers specially designed for storing PHRs, storage is often outsourced to third-party service providers, which can lead to security and privacy risks.
Proposed system
In this paper, we propose a patient-centric framework and a suite of mechanisms for data access control to PHRs stored in semitrusted servers.
Fine-grained access control is enforced, meaning that different users are authorized to read different sets of documents. The key idea is to divide the system into multiple security domains: personal domains (PSDs) and public domains (PUDs).
Security and privacy risks can be reduced by encrypting the PHRs before outsourcing them; attribute-based encryption (ABE) is used to encrypt each patient’s PHR files.
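The paper does not include code, but the access semantics of ABE can be illustrated with a short sketch. The Java fragment below is a rough stand-in only: it gates an ordinary AES key behind an explicit attribute check, whereas real ABE (e.g., CP-ABE) binds the policy into the ciphertext cryptographically. The class name, sample policy, and attribute sets are all hypothetical.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Set;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Stand-in for ABE: the PHR is AES-encrypted, and the key is released only to
// users whose attributes satisfy the owner's policy. Real ABE enforces this
// cryptographically instead of via an explicit check.
public class AbeLikePhrDemo {
    public static void main(String[] args) throws Exception {
        byte[] phr = "blood pressure: 120/80".getBytes(StandardCharsets.UTF_8);

        // Hypothetical policy: a user must hold BOTH attributes to decrypt.
        Set<String> policy = Set.of("physician", "cardiology");

        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = enc.doFinal(phr);   // stored on the semitrusted server

        // A requesting user presents her attributes; the key is usable only if
        // every attribute required by the policy is present.
        Set<String> userAttributes = Set.of("physician", "cardiology", "staff");
        if (userAttributes.containsAll(policy)) {
            Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
            dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
        }
    }
}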
Related Work:
In this section, we review related work addressing privacy and security issues in the cloud.
Cloud computing has raised a range of important privacy and security issues [2], [3], [4]. These issues arise because, in the cloud, users’ data and applications reside, at least for a certain amount of time, on a cloud cluster owned and maintained by a third party. Pearson et al. have proposed accountability mechanisms to address the privacy concerns of end users [4] and later developed a privacy manager [5]. Their basic idea is that the user’s private data are sent to the cloud in encrypted form and processed directly on the encrypted data; the output of the processing is then deobfuscated by the privacy manager to reveal the correct result. However, the privacy manager provides only limited protection, in that it offers no guarantees once the data have been disclosed.
The only work proposing a distributed approach to accountability is from Lee and colleagues [6], who propose an agent-based system specific to grid computing: distributed jobs, along with their resource consumption at local machines, are tracked by static software agents. The notion of accountability policies in [6] is related to the Cloud Information Accountability (CIA) framework discussed here, but it focuses mainly on resource consumption and on tracking subjobs processed at multiple computing nodes, rather than on access control.
In addition, the CIA framework may look similar to works on secure data provenance [7], [8], but in fact it differs greatly from them in terms of goals, techniques, and application domains.
Problem Statement:
Consider the following illustrative example which serves as the basis of our problem statement.
Example: Alice, a professional photographer, plans to sell her photographs using the SkyHigh Cloud Services. For her business in the cloud, she has the following requirements (a sketch encoding them as policy objects follows the list):
• Her photographs are downloaded only by users who have paid for her services.
• Potential buyers are allowed to view her pictures first before they make the payment to obtain the download right.
• Due to the nature of some of her works, only users from certain countries can view or download some sets of photographs.
• For some of her works, users are allowed to only view them for a limited time, so that the users cannot reproduce her work easily.
• In case any dispute arises with a client, she wants to have all the access information of that client.
• She wants to ensure that the cloud service providers of SkyHigh do not share her data with other service providers, so that accountability is provided.
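As a rough illustration of how such requirements might travel with the data, the sketch below encodes them as simple policy objects. Everything here (class and field names, country codes, the five-minute limit) is hypothetical and not part of any real SkyHigh API.

import java.time.Duration;
import java.util.List;
import java.util.Set;

// Hypothetical encoding of Alice's requirements as access policies that would
// accompany her data to the cloud service provider.
public class AlicePolicies {
    enum Action { VIEW, DOWNLOAD }

    record AccessPolicy(String photoSet, Action action, boolean requiresPayment,
                        Set<String> allowedCountries, Duration viewLimit) { }

    public static void main(String[] args) {
        List<AccessPolicy> policies = List.of(
            // Downloads only for users who have paid; no country or time limit.
            new AccessPolicy("portfolio", Action.DOWNLOAD, true, Set.of("*"), null),
            // Free previews before purchase.
            new AccessPolicy("portfolio", Action.VIEW, false, Set.of("*"), null),
            // Some sets are restricted to certain countries (codes are made up).
            new AccessPolicy("restricted-set", Action.VIEW, false, Set.of("US", "CA"), null),
            // Time-limited viewing so the work cannot easily be reproduced.
            new AccessPolicy("limited-set", Action.VIEW, true, Set.of("*"), Duration.ofMinutes(5))
        );
        policies.forEach(System.out::println);
    }
}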
With the above scenario in mind, we identify the common requirements and develop several guidelines for achieving data accountability in the cloud. A user who subscribes to a cloud service usually needs to send his or her data, along with any associated access control policies, to the service provider. Once the data are received, the service provider is granted access rights on them, such as read, write, and copy. Under conventional access control mechanisms, once these rights are granted the data become fully available to the service provider.
To track the actual usage of the data, we aim to develop novel logging and auditing techniques that satisfy the following requirements:
• The logging should be decentralized in order to adapt to the dynamic nature of the cloud.
• Every access to the user’s data should be correctly and automatically logged.
• Log files should be reliable and tamper-proof, to prevent illegal insertion, deletion, and modification by malicious parties (see the hash-chain sketch after this list).
• Log files should be sent back to their data owners periodically to inform them of the current usage of their data.
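One standard way to approach the tamper-proofing requirement is a hash chain, where each record commits to its predecessor. The sketch below shows only this idea; the record format is invented, and the actual framework additionally encrypts and error-protects its records.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Tamper-evident log sketch: every record folds in the hash of the previous
// record, so any insertion, deletion, or modification breaks the chain.
public class HashChainedLog {
    private String lastHash = "GENESIS";                 // anchor of the chain
    private final StringBuilder entries = new StringBuilder();

    void append(String event) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha.digest((lastHash + "|" + event).getBytes(StandardCharsets.UTF_8));
        lastHash = HexFormat.of().formatHex(digest);
        entries.append(event).append(" -> ").append(lastHash).append('\n');
    }

    public static void main(String[] args) throws Exception {
        HashChainedLog auditLog = new HashChainedLog();
        auditLog.append("bob viewed photo_001 at 2013-05-03T12:10Z");
        auditLog.append("bob downloaded photo_002 at 2013-05-03T12:11Z");
        System.out.print(auditLog.entries);   // a verifier recomputes the chain to detect tampering
    }
}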
Overview:
Here we give an overview of the CIA framework, which conducts automated logging and distributed auditing of relevant access performed by any entity, carried out at any point in time, at any cloud service provider.
Major Components:
The CIA framework has two major components: the logger and the log harmonizer. The logger is strongly coupled with the user’s data: it is downloaded when the data are accessed and copied whenever the data are copied. It handles a particular instance or copy of the user’s data and is responsible for logging access to that instance or copy. The log harmonizer is the central component through which the user accesses the log files.
The main tasks of the logger include automatically logging access to the data items it contains, encrypting each log record using the public key of the content owner, and periodically sending the records to the log harmonizer. It may also be configured to ensure that the access and usage control policies associated with the data are honored. The logger requires only minimal support from the server (e.g., a valid Java virtual machine installed) in order to be deployed. The tight coupling between data and logger results in a highly distributed logging system, thereby meeting our first design requirement. The logger is also responsible for generating error-correction information for each log record and sending it to the log harmonizer. This error-correction information, combined with the encryption and authentication mechanisms, provides a robust and reliable recovery mechanism, thereby meeting the third requirement.
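The encryption step can be pictured with ordinary public-key cryptography. The paper pairs the logs with IBE; the sketch below substitutes plain RSA purely for illustration, and the record format is made up.

import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

// Sketch of the logger-side step: each log record is encrypted under the data
// owner's public key before being shipped to the log harmonizer, so the server
// holding the log cannot read it. (RSA stands in for the paper's IBE.)
public class LogRecordEncryption {
    public static void main(String[] args) throws Exception {
        // Stand-in for the owner's key pair; in the framework the harmonizer
        // holds the decryption key.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair owner = gen.generateKeyPair();

        String record = "user=bob action=view item=phr_42 time=2013-05-03T12:10Z";

        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, owner.getPublic());
        byte[] sealed = enc.doFinal(record.getBytes(StandardCharsets.UTF_8));

        // The harmonizer, holding the private key, recovers the record at audit time.
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, owner.getPrivate());
        System.out.println(new String(dec.doFinal(sealed), StandardCharsets.UTF_8));
    }
}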
The log harmonizer is responsible for auditing. Being the trusted component, the log harmonizer generates the master key and holds the decryption key of the IBE key pair, since it is responsible for decrypting the logs. Alternatively, decryption can be carried out on the client side if the path between the log harmonizer and the client is not trusted; in this case, the harmonizer sends the key to the client in a secure key exchange. The harmonizer supports two auditing strategies: push and pull. Under the push strategy, the log file is pushed back to the data owner periodically in an automated fashion; the pull mode is an on-demand approach, whereby the log file is obtained by the data owner as often as requested (both modes are sketched below). These two modes satisfy the aforementioned fourth design requirement. The log harmonizer is also responsible for handling log file corruption, and it can itself carry out logging in addition to auditing; separating the logging and auditing functions improves performance. Both the logger and the log harmonizer are implemented as lightweight, portable JAR files. The JAR file implementation provides automatic logging functions, which meets the second design requirement.
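The two auditing modes can be sketched as follows; the class and method names are hypothetical and not taken from the paper’s implementation.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the harmonizer's two auditing modes: push sends the
// log back to the owner on a schedule, pull returns it on demand.
public class AuditModes {
    private final StringBuilder logBuffer = new StringBuilder("encrypted log records...");

    // Pull mode: the data owner requests the current log whenever desired.
    String pull() {
        return logBuffer.toString();
    }

    // Push mode: periodically send the log back to the owner (daily here).
    void startPush(ScheduledExecutorService scheduler) {
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("pushed to owner: " + logBuffer),
                0, 24, TimeUnit.HOURS);
    }

    public static void main(String[] args) throws InterruptedException {
        AuditModes harmonizer = new AuditModes();
        System.out.println("pulled: " + harmonizer.pull());

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        harmonizer.startPush(scheduler);
        Thread.sleep(100);        // let the first push fire; a real harmonizer runs indefinitely
        scheduler.shutdownNow();
    }
}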