An Efficient Algorithm for Multi-class Support Vector Machines
Abstract
A novel algorithm for multi-class support vector machines (SVMs) is proposed in this paper. The tree constructed by our algorithm consists of a series of two-class SVMs. Considering both separability and balance, in each iteration the multi-class patterns are divided into two sets according to the distances between pairwise classes and the number of patterns in each class. This algorithm handles unevenly distributed problems well. The efficiency of the proposed method is verified by experimental results.
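To fix ideas, the general shape of such a tree of two-class SVMs can be sketched in Python as follows. This is only a generic illustration with a placeholder splitting rule (halving the class list); it does not implement the distance- and balance-based splitting criterion proposed in the paper, and all names are ours.

    import numpy as np
    from sklearn.svm import SVC

    def build_tree(X, y, classes):
        # Leaf: a single class remains, no classifier is needed.
        if len(classes) == 1:
            return {"leaf": classes[0]}
        # Placeholder split: halve the class list. The paper instead groups
        # classes using pairwise class distances and per-class pattern counts.
        half = len(classes) // 2
        left, right = classes[:half], classes[half:]
        keep = np.isin(y, classes)           # patterns reaching this node
        labels = np.where(np.isin(y, left), 1, -1)
        node_svm = SVC(kernel="linear").fit(X[keep], labels[keep])
        return {"svm": node_svm,
                "left": build_tree(X, y, left),
                "right": build_tree(X, y, right)}

Classifying a pattern then amounts to walking from the root to a leaf, following the sign of each node's decision function.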
1. Introduction
Since they were developed by Vapnik [10],[11], support vector machines (SVMs) have become very popular learning machines owing to their attractive features. By solving a quadratic programming (QP) problem, one can construct an optimal hyperplane that separates two classes of patterns. While there are many reports on SVMs for two-class pattern recognition problems (in this paper we call them two-class SVMs), how to construct an effective multi-class SVM is still an open research issue.
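As a concrete illustration of this two-class building block, a linear SVM can be trained in a few lines; the following is a minimal sketch using the scikit-learn library on synthetic data, not an example from the paper.

    import numpy as np
    from sklearn.svm import SVC

    # Synthetic two-class data: two Gaussian blobs in the plane.
    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(20, 2) - 2, rng.randn(20, 2) + 2])
    y = np.array([0] * 20 + [1] * 20)

    # Fitting solves the underlying QP and yields the maximum-margin
    # hyperplane f(x) = w . x + b separating the two classes.
    clf = SVC(kernel="linear", C=1.0).fit(X, y)
    print(clf.decision_function([[0.0, 0.0]]))  # signed value of f(x)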
Previous work on multi-class SVMs basically falls into two categories. For a k-class (k > 2) pattern recognition problem, methods in the first category construct a single QP problem that considers all classes at once [12],[1]. The resulting optimization problem is quadratic in (k − 1) × l variables (where l denotes the number of training patterns), so training such a multi-class SVM is usually very time-consuming. Moreover, compared to other methods, the resulting classifier is large. Although its mathematical formulation is elegant, this approach is therefore inefficient in real applications.
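For concreteness, one well-known formulation of this type is that of Weston and Watkins; whether this is exactly the formulation used in [12],[1] is our assumption, but it exhibits the variable count quoted above. With training pairs (x_i, y_i), i = 1, . . . , l, the primal problem reads (in LaTeX notation):

    \min_{w, b, \xi} \quad \frac{1}{2} \sum_{m=1}^{k} \|w_m\|^2
        + C \sum_{i=1}^{l} \sum_{m \neq y_i} \xi_i^m
    \text{s.t.} \quad w_{y_i} \cdot x_i + b_{y_i} \geq w_m \cdot x_i + b_m + 2 - \xi_i^m,
        \qquad \xi_i^m \geq 0, \quad m \neq y_i,

whose dual is a QP in the (k − 1) × l variables mentioned above.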
Instead of solving one large QP problem, methods in the second category solve a series of two-class SVMs to construct many decision functions. Some typical methods are described as follows. The first is the “one-versus-rest (OVR)” method [11],[8]. It constructs k two-class SVMs; the ith SVM (i = 1, 2, . . . , k) is trained on all training patterns, with the patterns of the ith class labeled +1 and all remaining patterns labeled −1. The class of an unknown pattern x is determined by arg max_{i=1,...,k} f_i(x), where f_i(x) is the decision function of the ith two-class SVM (see the sketch below). Obviously, both the training and the test phase of this method are usually very slow.
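A minimal OVR sketch, assuming scikit-learn's SVC as the two-class SVM (the function names are ours, not from the paper):

    import numpy as np
    from sklearn.svm import SVC

    def train_ovr(X, y, k):
        # One two-class SVM per class: class i versus all the rest.
        return [SVC(kernel="linear").fit(X, np.where(y == i, 1, -1))
                for i in range(k)]

    def predict_ovr(classifiers, x):
        # arg max over the k decision functions f_i(x).
        scores = [clf.decision_function([x])[0] for clf in classifiers]
        return int(np.argmax(scores))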
The second is the “one-versus-one (OVO)” method [3]. It constructs all possible two-class SVM classifiers, each trained on only two of the k classes, giving k(k − 1)/2 classifiers in total. In the test phase, the Max Wins algorithm [2] is used: each classifier casts one vote for its favored class, and the class with the most votes wins. The number of classifiers grows quadratically with k, so OVO becomes slow on problems with a large number of classes.
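A corresponding OVO sketch with Max Wins voting (again assuming scikit-learn's SVC; the names are illustrative):

    import numpy as np
    from itertools import combinations
    from sklearn.svm import SVC

    def train_ovo(X, y, k):
        # One classifier per unordered class pair: k(k - 1)/2 in total.
        models = {}
        for i, j in combinations(range(k), 2):
            mask = (y == i) | (y == j)
            models[(i, j)] = SVC(kernel="linear").fit(X[mask], y[mask])
        return models

    def predict_ovo(models, x, k):
        # Max Wins: every pairwise classifier votes for its favored class.
        votes = np.zeros(k, dtype=int)
        for clf in models.values():
            votes[int(clf.predict([x])[0])] += 1
        return int(np.argmax(votes))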
Later, a method based on a Directed Acyclic Graph, often called DAG, was presented [6]. The training of DAG is the same as that of OVO; unlike OVO, however, DAG uses tree-structured two-class SVMs in the test phase to determine which class an unknown pattern belongs to.
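Its test phase can be sketched as successive elimination over the OVO models trained above; this is a simplified rendering of the DAG evaluation under our assumptions:

    def predict_dag(models, x, k):
        # Keep a sorted list of candidate classes; each pairwise test removes
        # one class, so only k - 1 classifiers are evaluated per test pattern.
        candidates = list(range(k))
        while len(candidates) > 1:
            i, j = candidates[0], candidates[-1]
            winner = int(models[(i, j)].predict([x])[0])
            candidates.remove(j if winner == i else i)
        return candidates[0]

Because only the first or last candidate is ever removed, the list stays sorted and (i, j) always matches a key produced by train_ovo.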