Generalized Biased Discriminant Analysis for Content-Based Image Retrieval

Abstract

Biased Discriminant Analysis (BDA) is one of the
most promising Relevance Feedback (RF) approaches to deal with
the feedback samples imbalance problem for Content-Based
Image Retrieval (CBIR). However, the singular problem of the
positive within-class scatter and the Gaussian distribution
assumption for positive samples are two main obstacles impeding
the performance of the BDA RF for CBIR. To avoid both of these
intrinsic problems in BDA, in this paper, we propose a novel
algorithm called Generalized Biased Discriminant Analysis
(GBDA) for CBIR. The GBDA algorithm avoids the singular
problem by adopting the Differential Scatter Discriminant
Criterion (DSDC) and handles the Gaussian distribution
assumption by redesigning the between-class scatter with a
nearest neighbor approach. To alleviate the overfitting problem,
GBDA integrates the locality preserving principle; therefore, a
smooth and locally consistent transform can also be learned.
Extensive experiments show that GBDA can substantially
outperform the original BDA, its variations, and related Support
Vector Machine (SVM)-based RF algorithms.
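
As a rough illustration of the trace-difference idea behind DSDC, the Python sketch below builds BDA-style scatter matrices (both centered on the positive-sample mean) and takes the projection from the top eigenvectors of S_neg − μ·S_pos, so the possibly singular positive within-class scatter never has to be inverted. This is only a minimal sketch under our own assumptions; the function names, the trade-off weight mu, and the target dimensionality dim are illustrative and not the authors' implementation.

```python
import numpy as np

def bda_scatters(X_pos, X_neg):
    """Biased scatter matrices in the BDA spirit: both are centered
    on the positive-sample mean (the user's query concept)."""
    m_pos = X_pos.mean(axis=0)
    S_pos = (X_pos - m_pos).T @ (X_pos - m_pos)   # positive within-class scatter
    S_neg = (X_neg - m_pos).T @ (X_neg - m_pos)   # negatives scattered from the positive mean
    return S_pos, S_neg

def dsdc_transform(X_pos, X_neg, dim=10, mu=1.0):
    """Trace-difference (DSDC-style) projection: maximize
    tr(W^T (S_neg - mu * S_pos) W) with orthonormal W.
    No inverse of S_pos is needed, so its singularity is harmless."""
    S_pos, S_neg = bda_scatters(X_pos, X_neg)
    evals, evecs = np.linalg.eigh(S_neg - mu * S_pos)
    order = np.argsort(evals)[::-1]               # largest eigenvalues first
    return evecs[:, order[:dim]]                  # d x dim projection matrix W
```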

INTRODUCTION

Relevance Feedback (RF) [1, 2] is one of the most
powerful tools to enhance the performance of a
Content-Based Image Retrieval (CBIR) system [3, 4]. Most
RF schemes involve the user in the search process by
letting the user manually label semantically relevant and
irrelevant samples, which serve as positive and negative
feedbacks, respectively, for a query image.
Various RF methods have been developed during the past few
years based on different assumptions about the positive and
negative feedbacks. A one-class Support Vector Machine (SVM)
estimates the density of the positive feedbacks but ignores the
negative feedbacks [5]. A two-class SVM can separate the positive
and negative feedbacks from each other but treats the two groups
equally [6]. In [7], Tao et al. assume that the positive feedbacks
form a single set while the negative feedbacks split into a small
number of subsets, and a series of kernel marginal convex
machines is developed between the one positive group and the
several negative subgroups.
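
To make the two SVM-based baselines above concrete, here is a hedged sketch of how one-class and two-class SVM relevance feedback could be used to rank a database, with scikit-learn's OneClassSVM and SVC serving as stand-ins for the methods cited in [5] and [6]; the kernel choices and parameters (nu, C, gamma) are illustrative assumptions rather than the settings of the cited papers.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

def rank_by_one_class_svm(X_pos, X_db):
    """One-class SVM RF: fit a density model on positive feedbacks only,
    then rank database images by their decision value (highest first)."""
    model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X_pos)
    return np.argsort(-model.decision_function(X_db))

def rank_by_two_class_svm(X_pos, X_neg, X_db):
    """Two-class SVM RF: positives vs. negatives, both groups treated equally."""
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(len(X_pos)), -np.ones(len(X_neg))])
    model = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X, y)
    return np.argsort(-model.decision_function(X_db))
```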

EXPERIMENTAL RESULTS

We have implemented an image retrieval system on a Corel
Image Database that includes 10,763 images with 80 different
concepts [16, 34]. To represent images, we choose three types
of low-level visual features. For color, we utilize the color
histogram [35] to represent the color information, quantizing
hue and saturation into 8 bins each and value into 4 bins. We
use Weber's Law Descriptors [36] to represent the local
features of images, which results in a feature vector of 240
values. For shape, the edge directional histogram computed
from the Y component in YCrCb space is adopted to capture the
spatial distribution of edges [37]. Five categories, namely
horizontal, 45° diagonal, vertical, 135° diagonal, and isotropic
directions, are computed to form the shape features. All of these
features are combined into a single feature vector with 510
values (i.e., 8×8×4 + 9 + 240 + 5 = 510). Finally, each feature
component is normalized to zero mean and unit standard
deviation.
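
A simplified sketch of this feature pipeline is given below, assuming OpenCV and NumPy; it covers the 8×8×4 HSV histogram, a 5-bin edge directional histogram from the Y channel, and the final zero-mean/unit-variance normalization, while the 240-dimensional Weber's Law Descriptor is omitted for brevity. The bin boundaries and edge-strength threshold are our own assumptions, not the paper's exact settings.

```python
import numpy as np
import cv2

def hsv_histogram(img_bgr):
    """8x8x4 HSV color histogram (8 hue, 8 saturation, 4 value bins) -> 256 values."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 4],
                        [0, 180, 0, 256, 0, 256]).flatten()
    return hist / max(hist.sum(), 1.0)

def edge_direction_histogram(img_bgr):
    """5-bin edge directional histogram from the luminance (Y) channel:
    four gradient-orientation bins standing in for the horizontal / 45-degree /
    vertical / 135-degree categories, plus an isotropic (weak-edge) bin."""
    y = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0].astype(np.float32)
    gx = cv2.Sobel(y, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(y, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    ang = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0
    strong = mag > mag.mean()                      # crude edge / non-edge split
    hist = np.zeros(5)
    hist[4] = np.count_nonzero(~strong)            # isotropic (weak-edge) pixels
    bins = np.digitize(ang[strong], [22.5, 67.5, 112.5, 157.5]) % 4
    for b in range(4):
        hist[b] = np.count_nonzero(bins == b)
    return hist / max(hist.sum(), 1.0)

def normalize_features(F):
    """Z-score each feature component over the whole database (rows = images)."""
    return (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)
```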

CONCLUSION

To avoid the intrinsic problems (i.e., the singular problem of
the positive within-class scatter and the Gaussian distribution
assumption for the positive samples) in the original Biased
Discriminant Analysis (BDA) [10], this paper introduces a
Generalized Biased Discriminant Analysis (GBDA) approach
for CBIR, which is mainly based on the Differential Scatter
Discriminant Criterion (DSDC) [11, 22]. The GBDA algorithm
defines the separation of different classes as a trace difference
rather than a trace ratio, which can avoid the singular problem
of the positive within-class scatter in the original BDA. To
avoid the Gaussian assumption for the positive samples, the
GBDA defines the between-class scatter by resorting to
inter-class nearest-neighbor samples, thereby extracting
the most discriminative information. By integrating the
manifold regularization, a smooth and locally consistent
transform can also be learned for CBIR RF, effectively reducing
the risk of overfitting. Extensive experiments on a large Corel
Image Database of 10,763 images with 80 semantic concepts
have shown that the proposed GBDA significantly outperforms
the original BDA, its enhanced versions (namely, DBDA,
NBDA and MBA), as well as SVM and CSVM.
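
For readers who want a feel for the two ingredients named above, the sketch below shows one plausible way to (i) build a between-class scatter from inter-class nearest neighbors and (ii) form a k-NN graph Laplacian that can serve as a locality-preserving (manifold) regularizer; the neighborhood sizes and the unweighted graph are illustrative assumptions and not the authors' exact formulation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def nn_between_class_scatter(X_pos, X_neg, k=3):
    """Between-class scatter built from inter-class nearest neighbors:
    each positive sample is paired with its k closest negatives, so no
    Gaussian assumption on the positive class is required."""
    D = cdist(X_pos, X_neg)
    S = np.zeros((X_pos.shape[1], X_pos.shape[1]))
    for i, row in enumerate(D):
        for j in np.argsort(row)[:k]:
            d = (X_pos[i] - X_neg[j])[:, None]
            S += d @ d.T
    return S

def graph_laplacian(X, k=5):
    """Unweighted k-NN graph Laplacian L = D - W, usable as a
    locality-preserving regularizer on the learned transform."""
    D = cdist(X, X)
    W = np.zeros_like(D)
    for i, row in enumerate(D):
        for j in np.argsort(row)[1:k + 1]:        # skip the point itself
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(axis=1)) - W
```

A GBDA-style projection could then be obtained, in the same trace-difference spirit as the sketch after the abstract, from the top eigenvectors of a combination such as S_b_nn − μ·S_pos − β·XᵀLX, where μ and β are hypothetical trade-off weights.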