
Please use this identifier to cite or link to this item: http://hdl.handle.net/1860/1294

Title: 3D face structure extraction from images at arbitrary poses and under arbitrary illumination conditions
Authors: Zhang, Cuiping
Keywords: Electrical engineering;Human face recognition (Computer science);Image processing
Issue Date: 16-Mar-2007
Abstract: Since the events of 9/11, face detection and recognition have become important tools for homeland security, enabling the tracking and identification of suspects who may be planning terrorist attacks. The technology has also proven useful to law enforcement agencies, helping to identify or narrow down a suspect from surveillance footage at a crime scene, or to quickly find a suspect based on witness descriptions. In this thesis we introduce several improvements to morphable-model-based algorithms and use the 3D face structures extracted from multiple images to conduct illumination analysis and face recognition experiments. We present an enhanced Active Appearance Model (AAM) with several independently updated sub-models, which adds flexibility and achieves better feature localization. Most appearance-based models suffer from the unpredictability of the facial background, which can result in poor boundary extraction. To overcome this problem we propose local projection models that accurately locate face boundary landmarks. We also introduce a novel, unbiased cost function that casts face alignment as an optimization problem, incorporating shape constraints obtained from direct motion estimation to achieve a much higher convergence rate and more accurate alignment. Viewing angles are roughly categorized into four poses, and customized view-based AAMs align face images within each pose category. We also obtain individual 3D face structures by morphing a 3D generic face model to fit each individual face. The face contour is generated dynamically so that the morphed face looks realistic. To solve the correspondence problem between facial feature points on the generic and individual faces, we use an approach based on distance maps.
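The distance-map idea mentioned above can be illustrated with a toy sketch: build a map giving, for every pixel, the distance to (and index of) the nearest detected feature on the individual face, then read off correspondences for the generic-model landmarks. This is a minimal illustration, not the thesis's exact algorithm; the landmark coordinates and grid size are invented for the example.

```python
# Hedged sketch of distance-map-based feature correspondence.
# The landmarks and grid below are illustrative assumptions.
import numpy as np

def distance_map(points, shape):
    """Per-pixel Euclidean distance to the nearest feature point,
    plus the index of that nearest point (brute force, small grids)."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([ys, xs], axis=-1).astype(float)               # (H, W, 2)
    d = np.linalg.norm(grid[:, :, None, :] - points[None, None, :, :], axis=-1)
    return d.min(axis=2), d.argmin(axis=2)                         # dist, index

# Hypothetical feature points detected on the individual face image.
individual = np.array([[2.0, 3.0], [7.0, 8.0], [5.0, 1.0]])
dist, idx = distance_map(individual, (10, 10))

# Generic-model landmarks (integer pixel positions) are matched to the
# nearest individual-face feature simply by looking up the index map.
generic = [(2, 4), (6, 7)]
matches = [idx[y, x] for y, x in generic]
```

The lookup makes the per-landmark matching cost constant once the map is built, which is the practical appeal of distance maps over repeated nearest-neighbor searches.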
With the extracted 3D face structure we study illumination effects on facial appearance using spherical harmonic illumination analysis. By normalizing the illumination conditions across different facial images, we extract a global illumination-invariant texture map which, together with the extracted 3D face structure in the form of cubic morphing parameters, completely encodes an individual face and allows the generation of images at arbitrary poses and under arbitrary illumination. Face recognition is conducted based on the face shape matching error, the texture error, and the illumination-normalized texture error. Experiments show that compensating for illumination effects yields a higher recognition rate. Furthermore, fusing shape and texture information performs better than using either shape or texture alone.
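The spherical-harmonic normalization step can be sketched in miniature: under the Lambertian assumption, the shading at a surface normal lies (to good approximation) in a low-dimensional spherical-harmonic basis, so lighting coefficients can be recovered by least squares and divided out to leave an illumination-invariant texture. This toy uses only the four lowest-order basis functions (the thesis-style analysis would use the full 9D basis), assumes the albedo is known so the result is checkable, and all names and values are illustrative.

```python
# Hedged sketch of low-order spherical-harmonic illumination normalization.
# Assumptions: Lambertian shading, first-order SH basis only, known albedo.
import numpy as np

def sh_basis(normals):
    """First four SH basis functions evaluated at unit normals (N, 3):
    the constant term and the three linear terms (up to scale factors)."""
    nx, ny, nz = normals.T
    return np.stack([np.ones_like(nx), nx, ny, nz], axis=1)       # (N, 4)

rng = np.random.default_rng(0)
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

albedo = 0.5 + 0.5 * rng.random(100)             # "true" texture (toy data)
coeffs = np.array([0.8, 0.3, -0.2, 0.5])         # unknown lighting to recover
image = albedo * (sh_basis(normals) @ coeffs)    # Lambertian rendering

# Recover the lighting coefficients by least squares, then divide the
# estimated shading out of the image to get the normalized texture.
B = sh_basis(normals) * albedo[:, None]
est, *_ = np.linalg.lstsq(B, image, rcond=None)
normalized = image / (sh_basis(normals) @ est)   # illumination-invariant
```

Because the toy image is generated exactly within the basis, the recovered coefficients match and the normalized texture equals the albedo; on real images the fit is approximate and the residual feeds the texture-error term used in recognition.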
URI: http://hdl.handle.net/1860/1294
Appears in Collections:Drexel Theses and Dissertations

Files in This Item:

File: Zhang_Cuiping.pdf (5.11 MB, Adobe PDF)

Items in iDEA are protected by copyright, with all rights reserved, unless otherwise indicated.
