IMAGE CLASSIFICATION USING SEGMENTATION


INTRODUCTION

The traditional processing flow of segmentation followed by classification in computer vision assumes that the segmentation is able to successfully extract the object of interest from the background image. It is extremely difficult to obtain a reliable segmentation without any prior knowledge about the object that is being extracted from the scene. This is further complicated by the lack of any clearly defined metrics for evaluating the quality of segmentation or for comparing segmentation algorithms.

We propose a method of segmentation that addresses both of these issues by using the object classification subsystem as an integral part of the segmentation. This provides contextual information regarding the objects to be segmented, and allows us to use the probability of correct classification as a metric to determine the quality of the segmentation. We view traditional segmentation as a filter operating on the image, independent of the classifier, much like the filter methods for feature selection. We propose a new paradigm for segmentation and classification that follows the wrapper methods of feature selection: our method wraps the segmentation and classification together, and uses the classification accuracy as the metric to determine the best segmentation.

By using shape as the classification feature, we are able to develop a segmentation algorithm that relaxes the requirement that the object of interest be homogeneous in some low-level image parameter, such as texture, color, or grayscale. This is an improvement over other segmentation methods that have used classification information only to modify the segmenter parameters, since those algorithms still require an underlying homogeneity in some parameter space.
Rather than considering our method as yet another segmentation algorithm, we propose that our wrapper method can be considered as an image segmentation framework, within which existing image segmentation algorithms may be executed.

IMAGE CLASSIFICATION

Classification is the process of connecting the classes in an image map with the image objects in a scene. After the process of feature definition, we adopt a nearest-neighbour classifier to assign each image object to a class. The nearest-neighbour classifier operates in a given feature space with given samples for each class. Its principle is simple: first, we identify samples, typical representatives of each class; then, for each image object, the algorithm looks for the closest sample object in the feature space. If the result of classification contains many misclassifications, we can refine the samples and classify again.
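The nearest-neighbour assignment described above can be sketched in a few lines of Python. This is only an illustrative toy, not the report's implementation: the class names, the two-dimensional feature vectors, and the use of a single sample per class are all hypothetical.

```python
import math

def nearest_neighbour_classify(object_features, class_samples):
    """Assign each image object to the class of its closest sample
    in feature space (Euclidean distance)."""
    labels = []
    for feat in object_features:
        best_class, best_dist = None, float("inf")
        for cls, samples in class_samples.items():
            # distance to the nearest sample of this class
            d = min(math.dist(feat, s) for s in samples)
            if d < best_dist:
                best_class, best_dist = cls, d
        labels.append(best_class)
    return labels

# Hypothetical 2-D feature vectors, e.g. (mean grayscale, texture energy)
samples = {
    "water":  [(0.2, 0.1)],
    "forest": [(0.8, 0.9)],
}
objects = [(0.25, 0.15), (0.7, 0.85)]
print(nearest_neighbour_classify(objects, samples))  # ['water', 'forest']
```

Refining the sample sets after a round of misclassifications, as the text suggests, amounts to editing `class_samples` and calling the function again.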

RELATED WORKS

There is an abundance of literature on image segmentation, and a number of review articles surveying it. Methods have also been defined for post-processing the low-level segmentation to further regularize the segmentation output, such as Markov Random Fields. Most recently, the data-driven Markov chain Monte Carlo (DDMCMC) algorithm for image segmentation has drawn considerable attention due to its ability to integrate texture, colour, and edge information in an optimal manner to devise a robust labelling of the image into homogeneous regions. These methods still rely on the assumption that the pixels belonging to the object of interest share a common set of low-level image attributes, thereby allowing the object to be extracted as a single entity. If an object is composed of multiple regions of differing texture or colour, then the object is divided into regions corresponding to each of these, and these sub-regions must then be re-assembled through some context-based post-processing to segment the complete object from the image. By employing an additional constraint upon the segmentation that encourages it to find a human, it would be possible to extract only the regions corresponding to the human in the image. This additional constraint can be provided through information regarding the desired shape of the final retained region. Borenstein and Ullman have developed a method for providing the class context to the segmentation via the use of known object “fragments”. They provide a set of standard image regions, called fragments, which are then compared with the incoming image to determine if there are regions of the image that match these components. They liken the process to building a puzzle from the top down based on these known fragments. The fragments are found using a correlation detector approach that uses a predefined set of fragments of the object of interest.
Our wrapper approach uses the image classification in a more global scheme, where the image classification is used to assemble regions derived from the traditional segmentation algorithms, and then further directs these algorithms to modify their segmentation parameters if the match is inadequate. Our algorithm builds the desired object from the regions provided in the image rather than a priori defined representative sub-images that are searched for in the image.

PROBLEM IDENTIFICATION AND DEFINING THE OBJECTIVE

The traditional processing flow for image-based pattern recognition consists of image segmentation followed by classification. This approach assumes that the segmentation is able to accurately extract the object of interest from the background image autonomously. Note that rather than merely providing a labelling of all regions in the image, the segmentation process must extract the object of interest from the background to support the subsequent feature extraction and object classification processing. The performance of this subsequent processing is strongly dependent on the quality of this initial segmentation. This expectation of ideal segmentation is rather unrealistic in the absence of any contextual information on what object is being extracted from the scene. There are three limitations of existing image segmentation algorithms.

OVERVIEW OF THE APPROACH

We derive the motivation for our approach from the domain of feature selection in pattern recognition, where there are two common mechanisms for selection, namely the filter method and the wrapper method. Filter methods analyse features independently of the classifier and use some goodness metric to decide which features should be kept. Wrapper methods, on the other hand, use a specific classifier, and its resultant probability of error, to select the features. Hence, in the wrapper method, the feature selection algorithm is wrapped inside the classifier.
As we mentioned earlier, one way to view existing image segmentation algorithms is that they are filters operating on the image. They are independent of the subsequent object recognition methods, much like the filter methods for feature selection are independent of the classification method. Consequently, we propose a new paradigm for object segmentation that follows the wrapper methods of feature selection, where we wrap the segmentation and the classification together, and use the classifier as the metric for selecting the best segmentation. Rather than considering our method as yet another segmentation algorithm, we propose that our wrapper method can be considered as an image segmentation framework, within which existing image segmentation algorithms may be executed.
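The wrapper framework can be sketched as a loop that runs an underlying segmenter at several parameter settings and keeps the segmentation the classifier scores highest. This is a minimal sketch under stated assumptions: `toy_segmenter` (a threshold over a 1-D "image") and `toy_classifier` (a shape-prior score) are hypothetical stand-ins for any existing segmentation and classification algorithms.

```python
def wrapper_segment(image, segmenter, classifier, parameter_grid):
    """Wrapper framework: execute an underlying segmentation algorithm
    at several parameter settings and keep the segmentation whose
    output the classifier scores highest."""
    best = None
    for params in parameter_grid:
        regions = segmenter(image, **params)   # low-level "filter" step
        label, score = classifier(regions)     # classification score as the metric
        if best is None or score > best[3]:
            best = (regions, params, label, score)
    return best

# Toy stand-ins: threshold a 1-D "image"; score by how close the
# foreground fraction is to an assumed shape prior of 0.5.
def toy_segmenter(image, threshold):
    return [1 if p > threshold else 0 for p in image]

def toy_classifier(regions):
    frac = sum(regions) / len(regions)
    return "object", 1.0 - abs(frac - 0.5)

image = [0.1, 0.4, 0.6, 0.9]
regions, params, label, score = wrapper_segment(
    image, toy_segmenter, toy_classifier,
    [{"threshold": t} for t in (0.3, 0.5, 0.8)])
print(params, score)  # {'threshold': 0.5} 1.0
```

The point of the sketch is the control flow: the segmenter is no longer an independent filter; the classifier's score feeds back to select among candidate segmentations.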

METHODOLOGY

There are five stages in our proposed wrapper-based segmentation processing, as shown in Fig. 4.1. For our wrapper-based segmentation, a complete set of features must be calculated for every blob combination. Note, however, that many combinations of blobs share common blob subsets, which means that some of the feature extraction calculations may be repeated many times. By placing the feature extraction ahead of the blob combining, we can pre-compute the features for each blob and then, if we choose features that are linear functions of the image pixels, simply sum the feature vectors of all the blobs in each particular blob combination. This provides a significant speed-up, making real-time operation feasible.
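The linearity trick above can be illustrated concretely. If each feature is a linear function of the image pixels (here area and the raw moments sum(x), sum(y), chosen as hypothetical examples), the feature vector of a blob combination is just the element-wise sum of the pre-computed per-blob vectors, with no need to revisit the pixels:

```python
def blob_features(blob_pixels):
    """Features linear in the pixels: area and raw moments sum(x), sum(y).
    Because they are sums over pixels, they add across blobs."""
    area = len(blob_pixels)
    sx = sum(x for x, _ in blob_pixels)
    sy = sum(y for _, y in blob_pixels)
    return (area, sx, sy)

def combination_features(per_blob, combo):
    """Feature vector of a blob combination = element-wise sum of the
    pre-computed per-blob vectors; no pixel access needed."""
    vecs = [per_blob[b] for b in combo]
    return tuple(sum(v[i] for v in vecs) for i in range(len(vecs[0])))

# Two hypothetical blobs given as pixel coordinate lists
blobs = {
    "A": [(0, 0), (0, 1)],
    "B": [(3, 3)],
}
per_blob = {name: blob_features(px) for name, px in blobs.items()}
print(combination_features(per_blob, ["A", "B"]))  # (3, 3, 4)
```

Summing the cached vectors gives exactly the same result as extracting features from the merged pixel set, which is the source of the speed-up claimed in the text.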

REGION LABELING

The purpose of region labeling is to group together regions with common attributes (e.g., grayscale, texture, etc.) into contiguous regions. Many mechanisms have been proposed for defining the “common characteristics,” such as expectation maximization (EM), normalized cuts, relaxation methods, region growing methods, and split-and-merge methods. The output of all of these methods is a labeling of the incoming image into a small number of regions, often called “blobs”. Our selection of the region labeling algorithm was based on its relative ease of use, its flexibility, and its suitability for real-time operation. Consequently, we have chosen to use the EM algorithm to fit a mixture of Gaussians that best matches the histogram of the grayscale values in the preliminary segmentation. The EM algorithm is attractive because it can easily be extended to use multiple features, such as texture as well as grayscale, for other applications. While the DDMCMC algorithm has been shown to be a unifying framework for region- and edge-based methods, it is defined as a process that runs continuously until the best candidate segmentation is found; this open-ended nature can make it difficult to adapt to a real-time embedded implementation.
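The EM fit of a Gaussian mixture to the grayscale values can be sketched in plain Python for the 1-D case. This is a toy illustration under simplifying assumptions (crude initialisation, fixed iteration count, a small variance floor); a real system would use an optimised library implementation.

```python
import math

def em_gmm_1d(values, k=2, iters=50):
    """Fit a k-component 1-D Gaussian mixture to grayscale values with EM,
    then label each value by its most responsible component."""
    lo, hi = min(values), max(values)
    # crude init: spread the means across the value range
    mus = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    sigmas = [(hi - lo) / (2 * k) + 1e-6] * k
    pis = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each value
        resp = []
        for v in values:
            w = [pis[j] / (sigmas[j] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((v - mus[j]) / sigmas[j]) ** 2)
                 for j in range(k)]
            t = sum(w)
            resp.append([wi / t for wi in w])
        # M-step: re-estimate mixing weights, means and variances
        for j in range(k):
            nj = sum(r[j] for r in resp)
            pis[j] = nj / len(values)
            mus[j] = sum(r[j] * v for r, v in zip(resp, values)) / nj
            var = sum(r[j] * (v - mus[j]) ** 2
                      for r, v in zip(resp, values)) / nj
            sigmas[j] = math.sqrt(var) + 1e-6
    labels = [max(range(k), key=lambda j: r[j]) for r in resp]
    return mus, labels

# Two well-separated grayscale clusters (hypothetical histogram samples)
values = [0.10, 0.12, 0.15, 0.80, 0.82, 0.85]
mus, labels = em_gmm_1d(values)
```

For a real image, `values` would be the pixel grayscale histogram, and the per-pixel labels form the blobs handed to the rest of the pipeline; extending the feature vector beyond grayscale changes only the density evaluated in the E-step.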

CLASSIFICATION OF BLOB COMBINATIONS

Each blob combination must be classified in this wrapper method, since we use the classification distance as the metric for selecting the best segmentation. This is accomplished by computing the classification accuracy for the current blob combination, given each of the pattern classes. A sequence of classification accuracies is thus generated as blobs are added for each class. The class with the overall best classification accuracy is considered to be the correct class, and the correct segmentation corresponds to the point in the blob accrual sequence where this best classification occurs. To accelerate the algorithm, we start with the five blobs that yielded the smallest individual classification distances, rather than starting with only a single blob.
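The blob accrual sequence can be sketched as a greedy loop: seed with the best-scoring individual blobs, keep adding the blob that most improves the classification score, and remember where along the sequence the best score occurred. The `score` callable and the integer blob IDs below are hypothetical stand-ins; a real system would score each combination against every pattern class as the text describes.

```python
def greedy_blob_accrual(blobs, score, seed_size=5):
    """Seed with the `seed_size` individually best blobs, then greedily
    add the blob that most improves the score; return the combination
    at the point in the accrual sequence where the score peaked."""
    ranked = sorted(blobs, key=lambda b: score([b]), reverse=True)
    combo = ranked[:seed_size]
    remaining = ranked[seed_size:]
    best_combo, best_score = list(combo), score(combo)
    while remaining:
        cand = max(remaining, key=lambda b: score(combo + [b]))
        s = score(combo + [cand])
        combo.append(cand)
        remaining.remove(cand)
        if s > best_score:                 # remember the peak of the sequence
            best_combo, best_score = list(combo), s
    return best_combo, best_score

# Toy score: reward blobs belonging to a hypothetical target object,
# penalise background blobs (higher is better).
target = {1, 2, 3, 4, 5, 6}
def toy_score(combo):
    s = set(combo)
    return len(s & target) - 2 * len(s - target)

combo, s = greedy_blob_accrual([1, 2, 3, 4, 5, 6, 7], toy_score)
print(sorted(combo), s)  # [1, 2, 3, 4, 5, 6] 6
```

Note that blob 7 (background) is examined but never retained, since the score peaks before it is added; that peak is the "correct segmentation" of the text.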
