23-04-2014, 12:42 PM
Fuzzy Support Vector Machines
Abstract
A support vector machine (SVM) learns the decision surface from two distinct classes of input points. In many applications, however, an input point may not be fully assigned to one of these two classes. In this paper, we assign a fuzzy membership to each input point and reformulate the SVM so that different input points can make different contributions to the learning of the decision surface. We call the proposed method fuzzy SVMs (FSVMs).
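The core idea of the abstract can be sketched with per-sample weights. This is an assumption on my part: the paper derives its own QP formulation, but scikit-learn's `sample_weight` argument scales each point's error penalty in a closely related way, so it serves as a minimal illustration of low-membership points contributing less to the decision surface.

```python
# Sketch of the fuzzy-membership idea via per-sample error weights.
# Assumption: scikit-learn's SVC with sample_weight stands in for the
# paper's own FSVM formulation; the data and weights are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Two well-separated Gaussian classes in 2-D.
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
# Append one mislabeled outlier sitting inside the -1 cluster.
X = np.vstack([X, [[-2.0, -2.0]]])
y = np.append(y, 1)

# Fuzzy membership: near-zero weight for the suspected outlier,
# full weight for every other point.
s = np.ones(len(y))
s[-1] = 0.05

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y, sample_weight=s)

# The down-weighted outlier barely influences the separating hyperplane,
# so the two clean clusters are still classified correctly.
print(clf.score(X[:40], y[:40]))
```

With the outlier's weight near zero, the learned hyperplane stays close to the one obtained from the clean data alone, which is precisely the behavior a fuzzy membership is meant to buy.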
INTRODUCTION
THE theory of support vector machines (SVMs) is a relatively new classification technique that has drawn much attention in recent years [1]–[5]. The theory of SVMs is based on the idea of structural risk minimization (SRM) [3]. In many applications, SVMs have been shown to outperform traditional learning machines [1] and have been introduced as powerful tools for solving classification problems.
An SVM first maps the input points into a high-dimensional feature space and finds a separating hyperplane that maximizes the margin between the two classes in that space. Maximizing the margin is a quadratic programming (QP) problem that can be solved through its dual by introducing Lagrange multipliers. Without explicit knowledge of the mapping, the SVM finds the optimal hyperplane by using dot-product functions in feature space, which are called kernels. The solution of the optimal hyperplane can be written as a combination of a few input points, which are called support vectors.
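The two claims above can be checked directly: only a few training points become support vectors, and the decision function is a kernel-weighted combination of those points alone. A minimal sketch, assuming scikit-learn's `SVC` (its `dual_coef_` attribute stores the products of the Lagrange multipliers and the labels):

```python
# Verify that an RBF-kernel SVM's decision function is a combination of
# its support vectors: f(x) = sum_i (alpha_i * y_i) K(sv_i, x) + b.
# Assumption: scikit-learn's SVC; the data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.5, 0.5, (30, 2)), rng.normal(1.5, 0.5, (30, 2))])
y = np.array([-1] * 30 + [1] * 30)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)

# Only a subset of the training points end up as support vectors.
print(len(clf.support_vectors_), "of", len(X), "points are support vectors")

def rbf(a, b, gamma=1.0):
    """The RBF kernel K(a, b) = exp(-gamma * ||a - b||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Reconstruct the decision value at one point from the support vectors only.
x = X[0]
f = sum(coef * rbf(sv, x)
        for coef, sv in zip(clf.dual_coef_[0], clf.support_vectors_))
f += clf.intercept_[0]

# Matches the library's own decision function.
print(np.isclose(f, clf.decision_function([x])[0]))
```

The manual sum uses only the support vectors and their dual coefficients, never the full training set, which is what makes the SVM solution sparse.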