29-08-2016, 03:03 PM
1451473719-DesignAugmentedEmbeddedSystemfor2D3DVisualizationinCardiacSurgery.docx (attached)
Abstract— This paper aims to speed up the surgical workflow and to help surgeons easily coordinate transformations between several imaging displays. Visualizing 2D and 3D information in cardiac surgery is challenging: restricted visualization of the surgical field is one of the most acute problems in minimally invasive surgery (MIS). Because of the low image contrast and high vessel tortuosity of angiographic images in cardiac surgery, even experienced radiologists find these images difficult to interpret. Such images can therefore hardly meet the requirements of high resolution and 3D visualization of the surgical scene needed to understand anatomic structures for safe MIS procedures. Multiple monitors are required in the operating room to visualize the data. Radiologists today are specially trained to read such images, but they still need to infer the 3D structure, and these inferences can be wrong. We address this problem with 3D augmented visualization, using a volume rendering technique to create the augmented view, which is easier for the radiologist to comprehend. The main goal is to develop a complete angiographic visualization system that displays 2D X-ray and 3D anatomical information on a single monitor for the surgeon in the operating room. Based on this concept, we present a flexible method for highlighting and enhancing angiographic images.
Keywords— Augmented Reality, Visualization, Angiography
I. INTRODUCTION
THE recent decades have witnessed a rapid expansion of minimally invasive surgery (MIS) procedures and biopsies, such as single-port laparoscopy and natural orifice translumenal endoscopic surgery (NOTES). Medical imaging has played an important role in guiding MIS procedures, augmenting the surgeons' spatial orientation and aiding the identification of critical anatomy. However, current image-guided systems still confront various challenges, most importantly the restricted visualization of the surgical field. Preoperative magnetic resonance imaging (MRI) or computed tomography (CT) scans can provide critical information; nevertheless, they cannot solve the problem of restricted visualization during MIS [1].
Resolving the spatial arrangement of complex three-dimensional structures in an image can be a difficult task. Renderings of volume data acquired by imaging modalities such as preoperative MRI, CT, or X-ray often suffer from this problem, as such data frequently contain many angled, bending, fine, and overlapping structures. These images often look confusing and are difficult to interpret without additional cues. In surgeries, both open and minimally invasive, doctors usually rely on personal experience to perform the operation. Surgeons generally acquire their surgical skills through several years of observing senior surgeons; a major reason the skills take so long to learn is the need to mentally perform coordinate transformations between imaging systems.
In the past, volume data generated by preoperative MRI, CT, or X-ray could only be viewed as individual slices. This imposed demanding cognitive tasks on clinicians trying to fully appreciate the spatial layout [5]. Volume visualization has been a continuous challenge in the computer graphics community, and depth perception is an important facet of volume rendering. Bruckner and Gröller introduced volumetric halos into the volume rendering pipeline to facilitate depth perception. Halos are darkened or brightened regions surrounding the edges of specific structures; the added halos help the viewer judge spatial relationships more accurately [6].
Order-independent volume rendering techniques such as MIP (maximum intensity projection) and DRR (digitally reconstructed radiograph) rendering typically have poor depth perception. Mora and Ebert introduced order-independent volume rendering (OIVR) techniques to improve MIP and X-ray rendering using gradient signals inside stereo images: instead of using only the voxel data to produce the MIPs or DRRs, the gradient magnitude was used in the rendering to convey additional information [7]. Kersten et al. used a purely absorptive lighting model in DRR rendering to enhance the depth perception of synthetic X-ray images [8]. Lastly, Wieczorek et al. recently introduced an interactive X-ray perceptual visualization (IXPV) technique to improve 3D perception in X-ray; with pre-operative data, it allows the user to interactively manipulate the X-ray image by varying depth [9]. However, none of these techniques simultaneously visualizes in X-ray both the overlay and the complete 3D anatomy of interest. For that reason, I have developed an image processing system to convert the image into a visualized form.
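To make the two projection styles discussed above concrete, the following is a minimal numpy sketch of MIP and a crude DRR; it is an illustration of the general rendering idea, not the implementation used by any of the cited systems, and the Beer-Lambert exponential mapping in `drr_projection` is an assumed simplification.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    # MIP: keep only the brightest voxel along the chosen viewing axis.
    return volume.max(axis=axis)

def drr_projection(volume, axis=0):
    # Crude DRR: accumulate attenuation along each ray, then map
    # through a Beer-Lambert-style exponential to get image intensity.
    return 1.0 - np.exp(-volume.sum(axis=axis))

# Tiny synthetic volume with one bright voxel.
vol = np.zeros((4, 4, 4))
vol[2, 1, 3] = 5.0
mip = max_intensity_projection(vol)   # shape (4, 4)
drr = drr_projection(vol)             # shape (4, 4)
```

Note that both projections discard the depth coordinate along the viewing axis, which is exactly why these renderings convey poor depth perception.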
II. DESIGN METHODOLOGY
We divided our work into three parts: pre-processing of the image, depth estimation, and image visualization. The first part, pre-processing, uses two steps: image enhancement and edge detection. The second part uses depth estimation features.
A. Pre-processing of the image
For edge detection we use the Sobel operator, which is useful when converting a 2D angiographic image to its visualized form. The operator uses two 3×3 kernels that are convolved with the original image to approximate the derivatives: Cx, which responds to horizontal change, and Cy, which responds to vertical change. By applying these two operators we find the edges of the image.
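A minimal numpy sketch of this step, using the standard Sobel kernels for Cx and Cy (the paper names the operators but does not print the kernel values, so the standard definitions are assumed here):

```python
import numpy as np

# Standard Sobel kernels: Cx responds to horizontal intensity change,
# Cy (its transpose) to vertical change.
Cx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Cy = Cx.T

def convolve2d(image, kernel):
    # 'Valid' 2-D convolution with a 3x3 kernel (no padding).
    k = np.flipud(np.fliplr(kernel))  # true convolution flips the kernel
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * k)
    return out

def sobel_edges(image):
    # Gradient magnitude combines the horizontal and vertical responses.
    gx = convolve2d(image, Cx)
    gy = convolve2d(image, Cy)
    return np.hypot(gx, gy)
```

For example, on an image with a sharp vertical step edge, `sobel_edges` returns zero in the flat regions and a strong response where the window straddles the step.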
B. Depth estimation
To convert any image to its visualized form, depth estimation is essential. We divide an image into small parts and estimate a depth value for each part.
We use a feature vector for depth estimation, divided into two types: absolute and relative. Absolute features are used to find the absolute depth of a particular part, while relative features are used to find relative depth. We base these features on three parameters: texture variation, haze, and texture gradient. For texture variation, the relevant data lie mostly in the image intensity channel, so we apply Laws' masks to calculate texture energy. Haze is reflected in the low-frequency components of the colour channels, which we calculate by applying a local averaging filter. To calculate the texture gradient, we convolve the intensity channel with edge filters.
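The per-patch feature extraction described above can be sketched as follows. This is an illustrative simplification, not the paper's actual pipeline: simple statistics (local mean, variance, mean absolute finite difference) stand in as proxies for the haze, texture-energy, and texture-gradient measures, and the patch size of 8 is an assumption.

```python
import numpy as np

def patch_depth_features(intensity, patch=8):
    """Compute a toy 3-element depth feature vector for each non-overlapping
    patch: (local mean ~ haze proxy, variance ~ texture-energy proxy,
    mean absolute horizontal difference ~ texture-gradient proxy)."""
    h, w = intensity.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = intensity[i:i+patch, j:j+patch]
            haze = p.mean()                          # low-frequency content
            energy = p.var()                         # texture variation
            grad = np.abs(np.diff(p, axis=1)).mean() # edge-like variation
            feats.append((haze, energy, grad))
    return np.array(feats)
```

A learned regressor would then map each patch's feature vector to an absolute depth value, and differences between neighbouring patches' features would drive the relative-depth estimate.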
For texture variation, the texture energy measures use Laws' masks. These measures are computed by applying small convolution kernels to a digital image and then performing a non-linear windowing operation. The 2-D convolution kernels used for texture discrimination are generated from the following set of one-dimensional convolution kernels of length three:
L3 = [1, 2, 1], E3 = [−1, 0, 1], S3 = [−1, 2, −1]. These mnemonics stand for Level (average grey level), Edge (extract edge features), and Spot (extract spots).
The elements of each kernel sum to zero except for L3. From these one-dimensional convolution kernels we can generate two-dimensional convolution kernels by convolving a vertical 1-D kernel with a horizontal 1-D kernel. As an example, the S3L3 kernel is found by convolving a vertical S3 kernel with a horizontal L3 kernel; in matrix notation, S3L3 = S3ᵀ × L3. All 3×3 masks used in the analysis are shown in Table I.
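The construction of the 3×3 Laws masks from the 1-D kernels can be sketched in a few lines; the kernel values are the standard Laws definitions, which the text's zero-sum property (all kernels except L3) is consistent with.

```python
import numpy as np

# Laws' one-dimensional kernels of length three (standard definitions).
L3 = np.array([1, 2, 1])     # Level: average grey level
E3 = np.array([-1, 0, 1])    # Edge: extract edge features
S3 = np.array([-1, 2, -1])   # Spot: extract spots

def laws_2d(vertical, horizontal):
    # Convolving a vertical 1-D kernel with a horizontal one is the
    # outer product: (column vector) x (row vector) -> 3x3 mask.
    return np.outer(vertical, horizontal)

# All nine 3x3 masks, e.g. masks["S3L3"] = S3^T x L3.
kernels = [("L3", L3), ("E3", E3), ("S3", S3)]
masks = {a + b: laws_2d(kv, kh)
         for a, kv in kernels
         for b, kh in kernels}
```

Every mask except L3L3 sums to zero, so all other masks respond to structure (edges, spots) rather than to the mean grey level.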