Despite video footage and smartphone shots of the Boston Marathon bombing suspects, the FBI still had to resort to the old-fashioned “do you know this person?” technique to identify the men. Modern facial-recognition software alone can’t process fuzzy images, but when Marios Savvides, head of Carnegie Mellon’s CyLab, applied his super-resolution algorithm to an image of Dzhokhar Tsarnaev (not for the FBI investigation), the rendering was the 56th-best match out of 50,000 faces. Savvides expects a fuller version of the software to be available within three years. – Rachel Z. Arndt
Image 1 (Surveillance)
After collecting photos taken by witnesses and scouring footage from nearby closed-circuit TV cameras, FBI officials posted this image, among others, in an attempt to get the public to help identify the suspects.
But those first images were all of too low quality, with too little digital visual data, for facial-recognition software to process. Facial recognition requires at least 60 to 70 pixels between the eyes; most of the images had only 12 to 20.
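The pixel-count requirement above amounts to a simple measurement. As a rough sketch (not any vendor's actual pipeline), the check below uses hypothetical eye coordinates and takes the article's lower bound of 60 pixels as the threshold; real systems locate the eyes automatically before measuring.

```python
import math

def interocular_distance(left_eye, right_eye):
    """Euclidean distance in pixels between two eye-center coordinates."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.hypot(dx, dy)

def usable_for_recognition(left_eye, right_eye, threshold=60):
    """True if the eyes are far enough apart for matching (per the
    60-to-70-pixel figure cited in the article)."""
    return interocular_distance(left_eye, right_eye) >= threshold

# A surveillance frame with only ~15 px between the eyes fails the check:
print(usable_for_recognition((100, 200), (115, 200)))  # False
# A frontal photo with ~80 px passes:
print(usable_for_recognition((300, 400), (380, 400)))  # True
```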
Image 2 (Algorithm)
Savvides and other CyLab researchers used an algorithm to extrapolate what the suspect might look like from straight on. Unlike other software that fills in missing details of faces, the CyLab software doesn’t rely on symmetry alone, but instead tries to create a 3D version of a face from a 2D photo. “If you assume symmetry, you’ve thrown away biometric information,” Savvides says.
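Savvides's point about symmetry can be made concrete with a toy sketch. This is not the CyLab algorithm (which builds a 3-D model from a 2-D photo); it is a minimal, hypothetical illustration of what symmetry-based fill-in loses when it mirrors one half of a face onto the other.

```python
# Hypothetical 4x4 grid standing in for a face image, with a deliberate
# asymmetry (the 9) on the right side.
face = [
    [1, 2, 2, 1],
    [3, 4, 4, 9],  # asymmetric feature at row 1, column 3
    [3, 4, 4, 3],
    [1, 2, 2, 1],
]

def mirror_left_half(rows):
    """Fill in the right half of each row by mirroring the left half,
    i.e., the symmetry assumption Savvides warns against."""
    out = []
    for row in rows:
        half = len(row) // 2
        left = row[:half]
        out.append(left + left[::-1])
    return out

symmetric = mirror_left_half(face)
# The asymmetric feature is gone: the 9 becomes a 3, so that
# distinguishing biometric detail has been thrown away.
print(face[1][3], symmetric[1][3])  # prints: 9 3
```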
Image 3 (Mug shot)
Frontal photos are the most helpful in facial recognition because of what they’re compared with in the search for a match: driver’s license photos and mug shots, which are always taken from straight on. For surveillance cameras to get that kind of shot, says Brian Martin, director of biometric research at the security company MorphoTrust USA, “you want to set up the cameras in a way that people are walking through a choke point in a certain direction and at a certain angle.”