
Cross-pose face recognition


INTRODUCTION
Face recognition is a classical research topic in the computer vision and pattern recognition community. High accuracy can now be achieved under controlled imaging conditions, i.e., frontal faces with indoor lighting and neutral expressions. However, these requirements are rarely satisfied in real-world applications: variations of pose, illumination, and expression are common and uncontrollable, and when they are present, the performance of current face recognition systems may drop significantly [1]. Thus, the problem of face recognition is far from solved. Pose variation is one of the bottlenecks in automatic face recognition. The major difficulty of the pose problem is that the variations in facial appearance induced by pose are as large as, or even larger than, those caused by differences in identity. This leads to a special phenomenon: the distance between two faces of different persons under a similar viewpoint can be smaller than that between two faces of the same person under different viewpoints (see Fig. 1).

Fig. 1. Pose variations lead to a special phenomenon: the distance between (a) two faces of different persons under a similar viewpoint is smaller than (b) that between two faces of the same person under different viewpoints.
This phenomenon explains why many popular face recognition methods, such as the eigenface [2] and Fisherface [3], fail when confronted with a large pose variation. Because pose variations divide a face into several linear subspaces, it is not surprising that directly comparing two vectors from different subspaces leads to poor performance in such methods. Viewed from this point, the key problem for cross-pose face recognition is how to measure the similarity between two different linear subspaces.

Directly matching two faces of different poses is quite difficult, since pose varies in 3-D space while face images carry only 2-D appearance information. One way to recognize a face across poses is to reconstruct the 3-D shape of an input face. However, reconstructing the 3-D shape from a single 2-D image is a difficult, ill-posed problem. An alternative is to match faces indirectly, using coupled face data as a bridge to connect different poses. In multipose databases such as PIE [4], FERET [5], and Multi-PIE [6], face images are captured under different viewpoints for each person, so face images can be coupled by identity across poses.

The basis of linear-subspace-based approaches to face recognition is that a face image can be represented as a linear combination of the images in the training set; this representation can be formulated as a regression problem. Therefore, if two pose-specific subspaces are coupled by identity, the joint face representation across the two subspaces can be formulated as a coupled regression problem. From this viewpoint, we revisit cross-pose face recognition. We found that, if the bias and variance in the two coupled linear subspaces are balanced simultaneously, the regressors become more effective for pose-invariant face representation.
Based on this observation, a regressor-based approach is proposed for recognizing faces across different poses. The experimental results show that the proposed method is simple but very effective; when further enhanced with coarse alignment and local Gabor features, its recognition accuracy is close to that of the state-of-the-art method.
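The coupled-regressor idea can be sketched in a few lines of NumPy. The toy data, the ridge penalty `lam`, and the cosine matching below are illustrative assumptions rather than the paper's exact formulation; the sketch only shows how representing a probe as a regularized linear combination of the gallery in its own pose-specific subspace yields coefficients that can be matched against frontal representations, provided the two galleries are coupled by identity (column i is the same person in both).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: columns are vectorized face images of the same
# 5 identities, captured under a frontal and a non-frontal viewpoint.
n_pixels, n_ids = 64, 5
gallery_frontal = rng.standard_normal((n_pixels, n_ids))
gallery_profile = rng.standard_normal((n_pixels, n_ids))

def represent(gallery, probe, lam=0.1):
    """Ridge regression: represent a probe face as a linear combination of
    the gallery images in its pose-specific subspace. The penalty lam
    trades bias against variance."""
    G = gallery
    return np.linalg.solve(G.T @ G + lam * np.eye(G.shape[1]), G.T @ probe)

# A probe seen under the profile viewpoint; its true identity is column 2.
probe = gallery_profile[:, 2] + 0.05 * rng.standard_normal(n_pixels)

# Because the subspaces are coupled by identity, the coefficient vectors
# are approximately pose invariant and can be compared across poses.
c_probe = represent(gallery_profile, probe)
c_gallery = [represent(gallery_frontal, gallery_frontal[:, i])
             for i in range(n_ids)]

scores = [float(c_probe @ c / (np.linalg.norm(c_probe) * np.linalg.norm(c)))
          for c in c_gallery]
print(int(np.argmax(scores)))  # recovers identity 2
```

The ridge penalty is what "balances bias and variance": a larger `lam` biases the coefficients toward zero but makes them less sensitive to noise in the probe image.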


RELATED STUDIES

Pose variance is a major challenge in face recognition research; surveys of the related literature can be found in [7] and [8]. For pose-robust face recognition based on 2-D images, most approaches fall into two categories: holistic approaches, which build pose-invariant models on the whole face, and local approaches, which represent a face with local patches.

Among the holistic approaches, the 3-D morphable model (3-DMM) proposed in [9] is considered the state of the art. The 3-DMM is built by applying principal component analysis to 3-D facial shapes and textures obtained from a laser scanner; the 3-D face is reconstructed by fitting the model to the input 2-D image. The 3-DMM can be incorporated into recognition in two different ways [10]: transforming nonfrontal face images to a frontal view, or performing recognition directly with the coefficients of the morphable model. In addition to 3-D models, coupled 2-D face images are also used to represent faces across poses. In the eigen light-field method proposed in [11], a complete appearance model covering all available pose variations is built on 2-D images. A face in each pose can be represented by the corresponding part of the complete model; the coefficients of the complete appearance model are estimated from a partial model, and face recognition is performed by matching the coefficients. Recently, Prince et al. [12] represented faces across pose differences using factor analysis. Factors in different poses are tied to construct a hidden "identity subspace" invariant to pose variations, and recognition is performed with probabilistic distance metric modeling based on the tied factors. In the above approaches, recognition is performed using the coefficients of the model. Similar to the 3-D geometric transformation in [10], faces can also be transformed into different poses via regression based on 2-D appearance models.
For instance, in [13], a frontal face model is transformed into nonfrontal views to extend the gallery set.
In addition to the holistic approaches, modelling faces at the patch level is another popular way to recognize faces across poses, since local patches are considered more robust to pose changes than holistic models. To our knowledge, the first work on patch-based cross-pose face recognition was proposed by Kanade and Yamada [14]. They divided a face into a patch grid and performed probabilistic modelling on each corresponding patch pair across poses; cross-pose recognition is then performed by summing up the similarities of all the patch pairs. Lucey and Chen [15] improved Kanade and Yamada's approach by modelling the joint appearance of frontal patches and holistic nonfrontal images. Although local patches are robust to pose variations, dividing faces into local patches raises the problem of cross-pose patch alignment, and several studies have addressed it in recent years. Liu and Chen [16] aligned the patches with a simple 3-D ellipsoid model. In [17], a better alignment is learned through a Lucas-Kanade-like optimization process [18]. Li et al. [19] aligned the patches with a generic 3-D face model and measured the similarity between corresponding patches by canonical correlation analysis. Different from the above patch-based recognition methods, Chai et al. [20] transform nonfrontal patches into frontal ones for virtual frontal-view synthesis.

This synthesis can be regarded as a preprocessing step for recognition. Recently, some elastic-matching-based approaches [21]-[23] have been proposed to tackle real-world face recognition. Owing to densely sampled features on overlapping local patches and Hausdorff-like elastic matching [21], [23] or random forests [22], these approaches have some tolerance to pose variations under near-frontal viewpoints; however, they fail to deal with large pose differences. It is worth pointing out that all the aforementioned holistic models can be extended to a series of separate local models. For example, Blanz and Vetter [9] divide the 3-DMM of a whole face into several local models, and Prince et al. [12] also extend their method to local Gabor features at several facial feature points. In addition to using holistic facial features, we also extend the proposed method with a local feature representation in this paper; the proposed method can therefore also be viewed as a local approach.
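The patch-grid matching described above can be illustrated with a minimal sketch. The patch size and the normalized-correlation score are illustrative stand-ins for Kanade and Yamada's probabilistic per-patch model; the point is only the structure of the method: divide both faces into corresponding patches and sum the per-patch similarities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy faces as 32x32 grayscale arrays.
face_a = rng.random((32, 32))
face_b = face_a + 0.1 * rng.random((32, 32))   # same person, mild variation
face_c = rng.random((32, 32))                  # different person

def patch_similarity(f1, f2, patch=8):
    """Divide both faces into a non-overlapping patch grid and sum a
    per-patch similarity (here a simple normalized correlation)."""
    total = 0.0
    for r in range(0, f1.shape[0], patch):
        for c in range(0, f1.shape[1], patch):
            p1 = f1[r:r + patch, c:c + patch].ravel()
            p2 = f2[r:r + patch, c:c + patch].ravel()
            p1 = p1 - p1.mean()
            p2 = p2 - p2.mean()
            total += float(p1 @ p2 /
                           (np.linalg.norm(p1) * np.linalg.norm(p2)))
    return total

# The same-person pair scores higher than the different-person pair.
print(patch_similarity(face_a, face_b) > patch_similarity(face_a, face_c))
```

In the cross-pose setting, the two grids would come from images under different viewpoints, which is exactly where the patch-alignment problem discussed above arises.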


CROSS-POSE FACE RECOGNITION BASED ON COUPLED AND BIAS-VARIANCE BALANCED REGRESSORS


In this section, we describe in detail the proposed cross-pose face recognition method based on a bias-variance balanced regressor. To evaluate the proposed method under relatively fair experimental conditions, an enhanced version with coarse alignment and local Gabor features is also presented.

CROSS-POSE FACE RECOGNITION BASED ON COUPLED REGRESSORS

The success of "eigenfaces" proves that 2-D frontal faces can be well represented by a linear subspace; that is, once the training set is fixed, new faces can be represented by the coefficients of a linear combination of the training images. The work of Blanz and Vetter [9] further shows that 3-D faces can also be well represented by linear subspaces. As shown in Fig. 2, 2-D face images under different viewpoints are synthesized from the 3-D face model. If the factor of illumination is excluded for a specific 3-D face, the differences among its 2-D images are caused only by differences in viewpoint. Obviously, the coefficients in the space of a 3-D face are pose invariant.
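The eigenface-style subspace representation can be sketched with an SVD. The training-set size, the number of components `k`, and the random data below are assumptions for illustration only; the sketch shows how a new face is reduced to the coefficients of its projection onto the learned linear subspace, which is the representation the linear-subspace view relies on.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training set: 20 vectorized frontal faces of 100 pixels each.
train = rng.standard_normal((100, 20))
mean_face = train.mean(axis=1, keepdims=True)

# Eigenfaces: principal components of the mean-centred training set,
# obtained from the thin SVD.
U, S, _ = np.linalg.svd(train - mean_face, full_matrices=False)
k = 10
eigenfaces = U[:, :k]

# A new face is represented by the coefficients of its projection onto
# the eigenface subspace, i.e., its linear-combination coefficients.
new_face = train[:, [3]] + 0.01 * rng.standard_normal((100, 1))
coeffs = eigenfaces.T @ (new_face - mean_face)

# Reconstruction from the coefficients approximates the original face.
recon = mean_face + eigenfaces @ coeffs
print(coeffs.shape, recon.shape)  # (10, 1) (100, 1)
```

Under a pose change the input no longer lies in this frontal subspace, which is why such coefficients fail under large pose variation unless the pose-specific subspaces are explicitly coupled, as in the proposed method.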