Neural networks have been applied extensively in computer vision and pattern recognition. Support vector machines (SVMs) have recently been proposed as a new kind of feed-forward network for pattern recognition.
Intuitively, given a set of points belonging to two classes, an SVM finds the hyperplane that places the largest possible fraction of points of the same class on the same side, while maximizing the distance from either class to the hyperplane. This hyperplane is called the optimal separating hyperplane; it minimizes the risk of misclassifying not only the examples in the training set, but also the unseen examples of the test set. The SVM is essentially developed to solve the two-class problem.
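The two-class maximum-margin idea can be sketched in a few lines. This is an illustration only, using scikit-learn's `SVC` and toy 2-D points of my own choosing; the project itself uses SVM-light for training.

```python
# Minimal two-class linear SVM sketch (toy data, for illustration only).
import numpy as np
from sklearn.svm import SVC

# Two well-separated toy classes of 2-D points (assumed data).
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
              [3.0, 3.0], [4.0, 3.0], [3.0, 4.0]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM finds the maximum-margin separating hyperplane w.x + b = 0.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# Points near each cluster fall on the corresponding side of the hyperplane.
print(clf.predict([[0.5, 0.5], [3.5, 3.5]]))
```

The learned weight vector `clf.coef_` and bias `clf.intercept_` define the optimal separating hyperplane; only the support vectors (the training points closest to the margin) determine it.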
The application of SVMs to computer vision problems has been proposed recently. For example, an SVM can be trained for face detection, where the discrimination is between two classes, face and non-face, each with thousands of examples.
It is difficult to discriminate or recognize different persons by their faces because of the similarity among faces. In this project, we focus on the face recognition problem and show that the discrimination functions learned by SVMs can give much higher recognition rates than the popular standard eigenface approach or other approaches.
After the features are extracted, the discrimination function between each pair of classes is learned by an SVM. Then a disjoint test set enters the system for recognition.
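The pipeline above can be sketched as follows. This is a hedged illustration under several assumptions of my own: tiny synthetic vectors stand in for face images, PCA projection stands in for the feature-extraction step (in the eigenface spirit), scikit-learn's `SVC` stands in for SVM-light, and the `recognize` helper combining the pairwise functions by majority vote is one possible strategy, not necessarily the one the project adopts.

```python
# Sketch: eigenface-style feature extraction + one SVM per class pair.
from itertools import combinations
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical 8x8 "face images" flattened to 64-D, 4 per person, 3 persons.
faces = np.vstack([rng.normal(loc=i * 3.0, scale=0.5, size=(4, 64))
                   for i in range(3)])
labels = np.repeat([0, 1, 2], 4)

# Feature extraction: project images onto the leading principal components.
pca = PCA(n_components=4).fit(faces)
feats = pca.transform(faces)

# Learn a discrimination function for each pair of persons.
pairwise = {}
for a, b in combinations(range(3), 2):
    mask = (labels == a) | (labels == b)
    pairwise[(a, b)] = SVC(kernel="linear").fit(feats[mask], labels[mask])

def recognize(img):
    """Project a probe image, then majority-vote over all pairwise SVMs."""
    f = pca.transform([img])[0]
    votes = [int(clf.predict([f])[0]) for clf in pairwise.values()]
    return max(set(votes), key=votes.count)
```

For n persons this trains n(n-1)/2 binary SVMs, one per pair, which is the standard one-versus-one decomposition of an n-class problem into bipartite ones.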
We will construct a binary tree structure to recognize the test samples; that is, we will develop a multi-class recognition strategy that uses conventional bipartite (two-class) SVMs to solve the face recognition problem.
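One common reading of the binary-tree strategy is a tournament: candidate classes are paired off at the leaves, each pairwise SVM eliminates one candidate, and winners advance until a single class remains. The sketch below assumes that reading; the toy 2-D data, the `tree_recognize` helper, and the bye rule for an odd candidate are all my own illustrative choices.

```python
# Hedged sketch of binary-tree (tournament) multi-class recognition
# built from conventional bipartite SVMs. Toy data, illustration only.
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

# Four well-separated toy "persons" in 2-D feature space (assumed data).
X = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1],
              [0, 10], [1, 10]], dtype=float)
y = np.array([0, 0, 1, 1, 2, 2, 3, 3])

# One bipartite SVM per class pair, keyed by the sorted pair.
svms = {}
for a, b in combinations(np.unique(y), 2):
    mask = (y == a) | (y == b)
    svms[(a, b)] = SVC(kernel="linear").fit(X[mask], y[mask])

def tree_recognize(x, candidates, svms):
    """Tournament over candidate classes: each round halves the field."""
    while len(candidates) > 1:
        nxt = []
        for i in range(0, len(candidates) - 1, 2):
            a, b = sorted((candidates[i], candidates[i + 1]))
            nxt.append(int(svms[(a, b)].predict([x])[0]))
        if len(candidates) % 2:   # odd candidate out gets a bye
            nxt.append(candidates[-1])
        candidates = nxt
    return candidates[0]
```

Compared with majority voting over all n(n-1)/2 pairwise SVMs, the tree evaluates only n-1 comparisons per probe, at the cost of depending on the bracket order.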
• Efficiently applying SVMs to the n-class problem of face recognition
• Figuring out training and/or image preprocessing strategies
• Comparing SVMs to other techniques
SOFTWARE AND TOOLS / TRAINING DATA:
• Thorsten Joachims' SVM-light
• Java Netbeans
• Visual Studio
• Image datasets for training and testing