1. CVSSP, University of Surrey, Guildford GU2 7XH, Surrey, UK.
2. CMP, Czech Technical University, Prague, Czech Republic.
The system presented here is a real-time face-verification application developed at the Centre for Vision, Speech and Signal Processing, University of Surrey. This prototype was implemented to evaluate a new technique for face-based person identification. Possible applications of this software include access control for buildings, alarm verification, and advanced interfaces for tele-services such as tele-shopping and tele-banking. The images shown below are screen captures of the system. Click on the images to download the corresponding AVI files. Four client accesses illustrate the system's robustness to various geometrical and photometrical changes as well as occlusion. In addition, there are two impostor accesses in which an unauthorized person attempts to gain access.
|Lighting [3.3 Mb]||Occlusion [2.2 Mb]||Scale [2.7 Mb]|
|Scale [2.4 Mb]||Impostor [2.5 Mb]||Impostor [2.8 Mb]|
The scenario is as follows. The user selects a client ID (the list box in the upper right corner), faces the camera and presses the button marked "Verify". A single frame is captured and matched against the set of database images corresponding to the claimed identity. For each comparison, the corresponding image is shown to the right and the match score is displayed under the status bar. When all comparisons have been made, the four most similar images are shown (the image in the top left corner received the highest score, followed by top right, bottom left and bottom right). If the highest score exceeds a preset threshold, the client is accepted. Note that the scores can theoretically vary over the full interval [0.0, 1.0], but in practice the majority of scores fall in a much narrower interval (typically [0.6, 0.8]).
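The decision logic described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the underlying matching technique is not specified here, so `match_score` is a hypothetical stand-in for whatever function scores a captured frame against one database image, and the threshold value 0.7 is chosen only as an example consistent with the typical score range quoted above.

```python
from typing import Callable, Dict, List, Sequence, Tuple

def verify(probe,                          # the single captured frame
           claimed_id: str,
           gallery: Dict[str, Sequence],   # client ID -> database images
           match_score: Callable,          # (probe, image) -> score in [0.0, 1.0]
           threshold: float = 0.7          # illustrative preset threshold
           ) -> Tuple[bool, List[Tuple[float, int]]]:
    """Score the probe against every database image of the claimed identity.

    Returns the accept/reject decision and the four best (score, index)
    pairs, mirroring the 2x2 display of the most similar images.
    """
    # One comparison per database image of the claimed identity.
    scores = [(match_score(probe, img), i)
              for i, img in enumerate(gallery[claimed_id])]
    # Keep the four highest-scoring images for display.
    best_four = sorted(scores, reverse=True)[:4]
    # The client is accepted iff the best score exceeds the threshold.
    accepted = best_four[0][0] > threshold
    return accepted, best_four
```

A usage sketch: `accepted, top4 = verify(frame, "client_07", gallery, match_score)` would capture the accept/reject decision together with the four images to display.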
For information about the verification method and the M2VTS face database, please refer to the related publications. The system was developed within the framework of the European ACTS project M2VTS. The authors also thank the Centre for Machine Perception, Czech Technical University, for providing facilities during the final stages of the implementation.