31-08-2016, 03:12 PM
Attachment: 1451827905-2ndreview1.pptx (1.2 MB)
INTRODUCTION
Learning linear subspaces of high-dimensional data is an important task in pattern recognition.
Linear subspace learning is widely used for dimensionality reduction and feature extraction in many pattern recognition tasks.
In this project we present a linear subspace learning algorithm based on learning a discriminative dictionary.
The data items are maintained in the dictionary; each input is converted into a vector, and the training set is arranged as a matrix of these vectors.
LITERATURE REVIEW
We use the K-LDA algorithm to reduce dimensionality while learning the dictionary into which the data are stored.
K-LDA learns a jointly reconstructive and discriminative dictionary for sparse representation.
It exploits class-label information to make the sparse representation more discriminative.
The inputs are an initial dictionary and a labeled training data set.
The algorithm regularizes the trade-off between the reconstructive and discriminative quality of the learned atoms.
K-LDA thus aims to improve the discriminative power of the sparse representation.
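The reconstruction/discrimination trade-off above can be written as a K-SVD-style objective augmented with a Fisher (LDA) term on the sparse codes. The exact weights and constraints below are our assumption for illustration; the review does not give the formula:

min over D, X of  ||Y - D X||_F^2 + lambda * [ tr(S_W(X)) - tr(S_B(X)) ]   subject to  ||x_i||_0 <= T

Here Y is the training data matrix, D the dictionary, X the sparse codes, S_W(X) and S_B(X) the within-class and between-class scatter matrices of the codes, and T the sparsity level. The first term rewards reconstruction; the bracketed term rewards codes that cluster by class.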
RELATED WORKS
Difference between K-SVD and K-LDA
K-SVD is a dictionary learning algorithm for creating a dictionary for sparse representations.
K-SVD is a generalization of the k-means clustering method: it iteratively alternates between sparse-coding the input data with the current dictionary and updating the dictionary atoms to better fit the data.
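The alternation just described can be sketched in numpy. This is a minimal illustrative implementation, not the authors' code: the function names `omp` and `ksvd`, the greedy coding step, and all parameter values are our choices.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select up to k atoms for signal y."""
    residual = y.copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j in idx:
            break  # residual no longer correlates with a new atom
        idx.append(j)
        # Re-fit coefficients over the selected atoms (least squares)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def ksvd(Y, n_atoms, sparsity, n_iter=10, seed=0):
    """Alternate sparse coding (OMP) and SVD-based atom updates."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
    X = np.zeros((n_atoms, Y.shape[1]))
    for _ in range(n_iter):
        # Step 1: sparse-code every signal with the current dictionary
        X = np.column_stack([omp(D, y, sparsity) for y in Y.T])
        # Step 2: update each atom (and its coefficients) via rank-1 SVD
        for j in range(n_atoms):
            used = np.nonzero(X[j, :])[0]
            if used.size == 0:
                continue  # atom unused this round; leave it unchanged
            X[j, used] = 0
            E = Y[:, used] - D @ X[:, used]  # error without atom j
            U, s, Vt = np.linalg.svd(E, full_matrices=False)
            D[:, j] = U[:, 0]
            X[j, used] = s[0] * Vt[0, :]
    return D, X
```

On synthetic data generated as sparse combinations of a ground-truth dictionary, the reconstruction error drops steadily over iterations; K-LDA modifies step 2 so that the atom update also uses the class labels.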
K-LDA is an algorithm for learning jointly discriminative and reconstructive dictionaries.
K-LDA exploits the information about the labels of training data directly in updating the dictionary atoms.
ALGORITHMS
Linear Discriminant Analysis (LDA)
LDA is a classification method: it is simple, mathematically robust, and often produces models whose accuracy matches that of more complex methods.
LDA searches for a linear combination of variables (predictors) that best separates two classes (targets).
It is also commonly used as a dimensionality reduction technique in the pre-processing step of pattern-classification and machine learning applications.
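The two-class case described above can be sketched directly in numpy: the Fisher direction is proportional to the inverse within-class scatter times the difference of class means. The function name and the toy Gaussian data are our own illustration:

```python
import numpy as np

def lda_direction(X1, X2):
    """Fisher's linear discriminant for two classes: the direction w
    maximizing between-class separation relative to within-class scatter."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter matrix S_W
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    # Optimal direction: w proportional to S_W^{-1} (m1 - m2)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

# Toy data: two well-separated Gaussian classes (illustrative only)
rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
X2 = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
w = lda_direction(X1, X2)
# Midpoint of the projected class means serves as a decision threshold
threshold = ((X1 @ w).mean() + (X2 @ w).mean()) / 2
```

Projecting each sample onto w reduces the 2-D data to one dimension in which the two classes fall on opposite sides of the threshold, which is exactly the dimensionality-reduction use of LDA mentioned above.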