Biometric Warfare: Liveness Detection – First Try

We have already discussed the ideas behind our liveness detection research. Here we describe our real-life experiments.

The first version of the liveness test was implemented as an interactive procedure using the phone's front camera. During the test the user was prompted to:

  1. Look straight into the camera (recording starts)
  2. Move the camera horizontally 10-15 centimeters while keeping the pose and facial expression unchanged
  3. Close the right eye (recording ends)

For each stage in the above sequence, one frame is extracted and used for the liveness analysis.

Example frames from a true live test:

Example of a spoof attack using an image of the targeted person displayed on a computer screen:

Three methods were then used to verify liveness. For acceptance, the test must be validated by all of them.

3D scene depth disparity map analysis

To verify that a real three-dimensional person is present in front of the camera, we use two pictures taken with the camera from two different positions to create a stereoscopic effect. The images are converted to grayscale and normalized by intensity, and then the OpenCV cascade face detector is used to select the relevant parts of the images.
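For illustration, here is a minimal sketch of this preprocessing step in Python with OpenCV. The cascade file, the detectMultiScale parameters, and the helper name detect_face_region are choices made for the example, not necessarily those used in the experiments.

```python
import cv2

# Sketch of the preprocessing described above: grayscale conversion,
# intensity normalization and Haar-cascade face detection.
# Cascade file and detection parameters are illustrative choices.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_region(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)  # normalize intensity
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return gray[y:y + h, x:x + w]
```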

The selected areas are down-scaled for performance reasons. Stereo correspondence between the images is established using the OpenCV implementation of the block matching algorithm. The algorithm's output is a grayscale disparity map whose pixel intensity corresponds to the distance of scene objects from the camera. For our purposes, we rely on facial anatomy.

Specifically, the nose tip is the part of the face closest to the camera, and therefore the disparity map for valid images should have the nose area painted with maximal intensity. To verify this, we divide the face area of the disparity map into blocks forming a 5×5 matrix, aggregate the intensity values of all pixels in each block, and check whether cells [2×2] and [3×2] indeed contain the maximum value for the whole matrix.
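Continuing the sketch above, the disparity map and block check could look roughly like this. The StereoBM parameters, the crop size, and the 0-based cell indices are assumptions for illustration.

```python
import numpy as np
import cv2

def disparity_block_check(face_left, face_right, size=(128, 128)):
    """Sketch of the disparity-map check: block-matching stereo on two
    grayscale face crops, aggregated over a 5x5 grid, expecting the
    nose-area cells to hold the global maximum. Matcher parameters and
    the 0-based cell indices [2,2] and [3,2] are illustrative assumptions."""
    left = cv2.resize(face_left, size)
    right = cv2.resize(face_right, size)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32)

    # Aggregate disparity values over a 5x5 grid of blocks.
    h, w = disparity.shape
    grid = np.zeros((5, 5))
    for i in range(5):
        for j in range(5):
            grid[i, j] = disparity[i * h // 5:(i + 1) * h // 5,
                                   j * w // 5:(j + 1) * w // 5].sum()

    # A live face should have its brightest (closest) blocks at the nose.
    return grid.max() in (grid[2, 2], grid[3, 2])
```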

Below is a comparison of the depth disparity analysis performed on true live test images and on spoof attack images.

True live test processed

Spoof attack processed

This method shows consistently good results for a variety of live tests, but its performance can be affected by a difference in focus between the two images, so quality control in the selection of video frames is essential.

Eye blinking verification

Eye blinking detection uses a pair of images in which the user is required to keep the same pose and facial expression, first with both eyes wide open and then with the right eye closed. The process has two components: verifying that the eye has indeed switched from open to closed, and verifying that the difference between the pictures is consistent with the micro-movements of a live three-dimensional face rather than the same spoofed image with slight modifications.

For the first task, we apply the standard OpenCV face and eye detection cascades to localize the left and right eye bounding rectangles within the face detection rectangle on the image where both eyes are open. The right eye rectangle found this way is also used for the second image, in case the eye detection method is unable to discover the closed eye. The contents of the right eye rectangles from both images are compared using a difference operator, and the resulting intensity map is evaluated for a characteristic pattern.

Specifically, we look for the difference created by the absence of the circular iris blob in the center of the eye bounding rectangle. This is done, as in the disparity map analysis, by comparing aggregated intensity values in a 5×5 matrix and verifying that the central cell holds the global maximum.
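A minimal sketch of this comparison, assuming grayscale crops of the same right-eye rectangle from both frames; the helper name and grid handling are illustrative.

```python
import numpy as np
import cv2

def eye_closed_check(eye_open, eye_closed, grid=5):
    """Sketch of the open/closed eye comparison: absolute difference of two
    eye crops, aggregated over a 5x5 grid, with the global maximum expected
    in the central cell where the iris disappears."""
    eye_closed = cv2.resize(eye_closed, (eye_open.shape[1], eye_open.shape[0]))
    diff = cv2.absdiff(eye_open, eye_closed).astype(np.float32)

    h, w = diff.shape
    cells = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cells[i, j] = diff[i * h // grid:(i + 1) * h // grid,
                               j * w // grid:(j + 1) * w // grid].sum()

    center = grid // 2
    return cells.argmax() == center * grid + center  # central cell is maximal
```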

The second task, verifying that the second frame is not a slightly modified spoof image, is based on analyzing the optical flow between the two frames with the OpenCV implementation of the Lucas-Kanade method. The displacement patterns of the points of interest are checked for uniformity, which would be symptomatic of a planar object, as opposed to the less ordered patterns expected from an irregular three-dimensional object.
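A rough sketch of the optical-flow uniformity check using OpenCV's pyramidal Lucas-Kanade implementation; the feature-detector parameters and the decision threshold are assumptions for illustration, not the exact criteria from our experiments.

```python
import numpy as np
import cv2

def flow_uniformity_score(frame1_gray, frame2_gray):
    """Sketch of the planarity check: track Shi-Tomasi corners with
    pyramidal Lucas-Kanade optical flow and measure how much the
    displacement vectors spread around their mean. A near-zero score
    suggests a flat (planar) spoof medium."""
    points = cv2.goodFeaturesToTrack(frame1_gray, maxCorners=200,
                                     qualityLevel=0.01, minDistance=7)
    if points is None:
        return None
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(
        frame1_gray, frame2_gray, points, None)

    good = status.ravel() == 1
    flow = (new_points - points).reshape(-1, 2)[good]
    if len(flow) == 0:
        return None

    # Small spread means the whole scene moved as one rigid plane.
    return float(np.mean(np.linalg.norm(flow - flow.mean(axis=0), axis=1)))

# Example decision rule (threshold is a hypothetical value):
# is_planar = flow_uniformity_score(f1, f2) < 0.5
```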

True live test processed
Spoof attack processed

Single image distortion analysis

Methods implemented for single image distortion analysis are primed to detect the following properties, characteristic of spoof attacks on printed and electronic media (the blurriness and moiré checks are sketched after this list):

  1. Specular reflections, based on the difference observed between light reflecting off a real face and off a spoof image.
  2. Color diversity, because genuine frames have a richer variety of colors compared to a second-hand recapture.
     Characteristic color diversity
  3. Blurriness features, on the assumption that the attacker tends to place the camera too close to the medium in order to hide its boundaries. The extremely short focus distance causes spoofed images to become blurred.
  4. Moiré effect detection, which is especially relevant when the spoof attack uses an electronic medium. Due to the overlapping of the digital grids of the camera and the spoof medium, characteristic peaks can be detected in the discrete Fourier transform at high frequencies.
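For illustration, here is a minimal sketch of the blurriness and moiré cues, using the variance of the Laplacian as a blur measure and the share of high-frequency energy in the Fourier spectrum as a moiré indicator. Both measures and the thresholds in the comment are assumptions, not the exact criteria from our experiments.

```python
import numpy as np
import cv2

def blurriness_score(gray):
    """Variance of the Laplacian: low values indicate a blurred image."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def high_frequency_energy(gray):
    """Fraction of spectral energy outside a low-frequency disc of the DFT.
    Moire patterns from screen recapture tend to add high-frequency peaks."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    y, x = np.ogrid[:h, :w]
    low_freq = (y - cy) ** 2 + (x - cx) ** 2 <= (min(h, w) // 8) ** 2
    return float(spectrum[~low_freq].sum() / spectrum.sum())

# Hypothetical decision rule combining the two cues:
# suspicious = blurriness_score(g) < 100.0 or high_frequency_energy(g) > 0.6
```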

Remaining issues

The liveness detection methods we tested need to be verified against existing public databases of spoofing attack replay images. There is also a need to test these methods on real-life, ethnically diverse population samples.

For some of the methods used in single image distortion analysis, the classification criteria separating real images from spoof attacks need to be refined.
