We as human beings have the ability to tell people apart just by looking at their faces. Light bounces off a face, into our eyes, and up to our brains, where some biological process lets us identify the person.
Our brain’s ability to recognize and remember faces is actually incredible and now machines can do it too. Facial recognition is a blossoming field of technology. Here’s an overview of how it works, where it’s being used and what the implications are for privacy and security.
A computer takes an image and calculates the distance between major structural pieces like your nose and your eyes. It could also take into account the width and curves of your face or the depth of your eye sockets. These measurements are converted to a numerical code called your face print.
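The idea of turning landmark distances into a numerical code can be sketched in a few lines. This is a toy illustration, not how any real system works: the landmark names and pixel coordinates below are made up.

```python
import math
from itertools import combinations

def face_print(landmarks):
    """Turn a dict of landmark coordinates into a tuple of pairwise distances."""
    names = sorted(landmarks)
    return tuple(
        round(math.dist(landmarks[a], landmarks[b]), 2)
        for a, b in combinations(names, 2)
    )

# Hypothetical landmark positions, in pixels:
face = {
    "left_eye": (30, 40),
    "right_eye": (70, 40),
    "nose_tip": (50, 65),
    "mouth": (50, 85),
}
print(face_print(face))  # six distances, one per pair of landmarks
```

Real systems extract dozens of landmarks automatically and feed far richer measurements (curves, depth, ratios) into the code, but the principle is the same: geometry in, numbers out.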
Once a computer knows your face print, it searches its database of images that have already gone through the same process to see if any codes match. Facial recognition was originally developed using 2D images, which can't take depth into account.
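The matching step can be sketched as a nearest-neighbor search with a tolerance: compare the new code against every stored one and accept the closest match only if it's close enough. The names, numbers, and tolerance here are all invented for illustration.

```python
import math

def match(print_, database, tolerance=5.0):
    """Return the name of the closest stored face print, or None if no match."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        d = math.dist(print_, stored)  # Euclidean distance between codes
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= tolerance else None

db = {
    "alice": (49.2, 32.0, 40.0, 20.0, 49.2, 32.0),
    "bob":   (55.0, 36.5, 44.0, 23.0, 55.0, 36.5),
}
print(match((49.0, 32.1, 40.2, 20.1, 49.3, 31.8), db))  # → alice
```

The tolerance matters: too loose and strangers match each other, too strict and the same person fails to match their own photo taken on a different day.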
2D systems rely heavily on the distances between your facial features, or landmarks. But angles and lighting can throw off the readings. The distance between your eyes and your nose looks very different in a straight-on image versus a profile picture.
Some systems address this issue by mapping a 2D image onto a 3D head and undoing the rotation. 3D cameras sense depth by projecting invisible light onto a face and using sensors to capture the distance of various points of that light from the camera itself.
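The angle problem is easy to demonstrate: take the same 3D landmarks, rotate the head, and project to 2D, and the apparent distances change dramatically. All coordinates here are made up.

```python
import math

def project(point, yaw_degrees):
    """Rotate a 3D point (x, y, z) around the vertical axis, then drop z."""
    x, y, z = point
    a = math.radians(yaw_degrees)
    return (x * math.cos(a) + z * math.sin(a), y)

left_eye, right_eye = (-20, 0, 0), (20, 0, 0)

for yaw in (0, 45, 80):  # frontal, three-quarter, near-profile
    d = math.dist(project(left_eye, yaw), project(right_eye, yaw))
    print(f"yaw {yaw:2d} deg: apparent eye distance = {d:.1f}")
```

At 0 degrees the eyes appear 40 units apart; near profile, the same eyes appear only about 7 units apart. Undoing the rotation (or measuring depth directly with a 3D camera) recovers the true geometry.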
Apple’s Face ID on current iPhones uses 30,000 infrared dots to map the contours of your face. Thanks to the extra depth measurements, 3D images aren’t as easily fooled by angles. But they can still be defeated by a different expression, glasses, or facial hair. A newer technology called skin texture analysis could help with that. It still measures the distance between landmarks, but at a much smaller scale.
It actually measures the distance between pores. Skin texture analysis is pretty new, but it could eventually be precise enough to reliably tell the difference between twins. Through an infrared camera, you can actually see the iPhone working to read my face.
It’s not just Apple, either. Google has this on the Pixel, Samsung has it on the Galaxy S9, and Facebook uses it to tag you and your friends in photos.
Google Photos uses it to categorize your gallery. It’s what’s behind the face filters on Instagram and Snapchat. Even home security cameras like the Nest Hello are using it to tell the difference between a stranger and a member of your family. Your face could one day serve as your boarding pass; passengers flying out of Orlando have already gotten a taste of what that’s like.
But I’m sure you’ve recognized the problem. A camera in a public place could recognize you without you even knowing about it. One of your biometric identifiers is just kind of out there. If an unsecured system grabs an image of you, a hacker could find a way to access that data, or maybe the whole reason the camera was there in the first place was to track your location. Whether you want them to or not, shopping malls could make personalized recommendations as soon as you walk in if they know your face.
Facial recognition is already being used by police in places like London and China, where it can match faces to a database of mug shots. Taylor Swift used it to watch out for stalkers at one of her concerts. The American Civil Liberties Union has protested the use of facial recognition by government bodies because of its implications for a surveillance state. Recently, Facebook was hit with a lawsuit after its tech ID’d people without their consent. The Ring video doorbell came under fire, too, for filing a patent that would have its cameras monitor neighborhoods for sex offenders. While facial recognition is cool in theory, there are loads of potentially problematic consequences to consider, and the rules, regulations, and consequences are all taking shape right now in this exciting field of tech.