
face detection

What is face detection?

Face detection, also called facial detection, is an artificial intelligence (AI)-based computer technology used to find and identify human faces in digital images and video. Face detection technology is often used for surveillance and tracking of people in real time. It is used in various fields including security, biometrics, law enforcement, entertainment and social media.

Face detection uses machine learning (ML) and artificial neural network (ANN) technology, and plays an important role in face tracking, face analysis and facial recognition. In face analysis, face detection uses facial expressions to identify which parts of an image or video should be focused on to determine age, gender and emotions. In a facial recognition system, face detection data is required to generate a faceprint and match it with other stored faceprints.

How face detection works

Face detection applications use AI algorithms, ML, statistical analysis and image processing to find human faces within larger images and distinguish them from nonface objects such as landscapes, buildings and other human body parts. Before face detection begins, the analyzed media is preprocessed to improve its quality and remove artifacts that might interfere with detection.

Face detection algorithms typically start by searching for human eyes, one of the easiest features to detect. They then try to detect facial landmarks, such as eyebrows, mouth, nose, nostrils and irises. Once the algorithm concludes that it has found a facial region, it does additional tests to confirm that it has detected a face.

To ensure accuracy, the algorithms are trained on large data sets that incorporate hundreds of thousands of positive and negative images. The training improves the algorithms' ability to determine whether there are faces in an image and where they are.
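
As a rough illustration of this pipeline, the following sketch uses OpenCV's pretrained frontal-face Haar cascade, a Viola-Jones-style detector described in the methods section below, to locate faces in a still image. The input file name and the detection parameters are illustrative assumptions, not fixed requirements.

```python
# Minimal face detection sketch using OpenCV's pretrained Haar cascade,
# a Viola-Jones-style detector. "photo.jpg" is an assumed placeholder file.
import cv2

# Load the frontal-face cascade that ships with OpenCV, trained on large
# sets of positive (face) and negative (nonface) examples.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

# Preprocess: the cascade works on grayscale pixel intensities.
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales; a region is only accepted as a face
# if several overlapping detections agree (minNeighbors).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(30, 30))

# Draw a box around each confirmed face region and save the result.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", image)
print(f"Detected {len(faces)} face(s)")
```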

Diagram of how face detection works in a facial recognition app.
Face detection software detects faces by identifying facial features in a photo or video using machine learning algorithms. It first looks for an eye, and from there it identifies other facial features. It then compares these features to training data to confirm it has detected a face.

Face detection methods

Face detection software uses several different methods, each with advantages and disadvantages:

Viola-Jones algorithm. This method is based on training a model to understand what is and isn't a face. Although the framework is still popular for recognizing faces in real-time applications, it has problems identifying faces that are covered or not properly oriented.

Knowledge- or rule-based. These approaches describe a face based on rules. Establishing well-defined, knowledge-based rules can be a challenge, however.

Feature-based or feature-invariant. These methods use features such as a person's eyes or nose to detect a face. They can be negatively affected by noise and light.

Template matching. This method is based on comparing images with previously stored standard face patterns or features and correlating the two to detect a face. However, this approach struggles to address variations in pose, scale and shape.

Appearance-based. This method uses statistical analysis and ML to find the relevant characteristics of face images. The appearance-based method can struggle with changes in lighting and orientation.

Convolutional neural network-based. A convolutional neural network (CNN) is a type of deep learning ANN used in image recognition and processing that's designed to process pixel data. A region-based CNN, also called an R-CNN, generates proposals on a CNN framework that localizes and classifies objects in images. These proposals focus on areas, or regions, in a photo that are similar to other areas, such as the pixelated region of an eye. If this region of the eye matches up with other regions of the eye, then the R-CNN knows it has found a match. However, CNNs can become so complex that they "overfit," which means they match regions of noise in the training data and not the intended patterns of facial features.

Single shot detector (SSD). While region proposal network-based approaches such as R-CNN need two passes -- one to generate region proposals and one to detect the object in each proposal -- an SSD needs only a single pass to detect multiple objects within the image. This makes SSDs faster than R-CNNs. However, SSDs have difficulty detecting small faces or faces farther away from the camera.

Diagram of steps in the deep learning process.
Face detection uses deep learning, a complex but effective AI approach.
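
To contrast the classical approach with the CNN- and SSD-based methods described above, the sketch below runs a deep learning face detector through OpenCV's dnn module. The model file names match OpenCV's sample ResNet-10 SSD face detector, and the 0.5 confidence cutoff is an assumption; obtaining the model files is outside the scope of the sketch.

```python
# Sketch of a CNN/SSD-based face detector using OpenCV's dnn module.
# The prototxt and caffemodel names follow OpenCV's sample ResNet-10 SSD
# face detector; the files are assumed to be present locally.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000.caffemodel")

image = cv2.imread("photo.jpg")
h, w = image.shape[:2]

# The SSD makes one 300x300 pass over the whole image; no separate
# region-proposal step is needed, which is why it's faster than R-CNN.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

# Each detection holds a confidence score and a normalized bounding box.
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:  # assumed confidence threshold
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("faces_dnn.jpg", image)
```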

Some techniques used in face detection applications include the following:

  • Background removal. If an image has a plain, mono-color background or a predefined, static one, removing the background can reveal the face boundaries.
  • Skin color. In color images, skin color can sometimes be used to find candidate face regions; however, this approach doesn't work reliably across all complexions or lighting conditions. A rough sketch of this technique appears after this list.
  • Motion. Using motion to find faces is another option. In real-time video, a face is almost always moving, so users of this method must calculate the moving area. One drawback of this approach is the risk of confusion with other objects moving in the background.

A combination of these strategies can provide a comprehensive face detection method.
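
The skin-color technique mentioned above can be sketched in a few lines. The example below thresholds an image in the YCrCb color space using commonly cited chroma bounds; those bounds are illustrative assumptions and, as noted, won't generalize across all complexions or lighting conditions.

```python
# Rough skin-color segmentation sketch in the YCrCb color space.
# The Cr/Cb bounds are commonly cited heuristics, not universal values.
import cv2
import numpy as np

image = cv2.imread("photo.jpg")
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)

# Keep pixels whose chroma falls inside an assumed "skin" range.
lower = np.array([0, 133, 77], dtype=np.uint8)
upper = np.array([255, 173, 127], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)

# Remove speckle so candidate face regions form connected blobs.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

# In practice, these candidate regions would be handed to a full detector
# for confirmation rather than treated as faces on their own.
skin_candidates = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite("skin_candidates.jpg", skin_candidates)
```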

Uses of face detection

Face detection has several uses, including the following:

Facial recognition. This technology uses face detection to go a step further, actually recognizing a person's face and identifying them.

Entertainment. Face detection is often used in movies, video games and virtual reality. Facial motion capture is used in face detection to electronically convert a human's facial movements into a digital database using cameras and laser scanners. This database can be used to produce realistic computer animation for movies, games or avatars.

Smartphones. Most smartphones use face detection to autofocus cameras for taking pictures and recording videos. Smartphones can also use face detection in place of passcodes. For instance, users of Apple iPhone X and later models can use face detection to unlock their phones.

Biometric authentication for smartphones: fingerprint authentication, voice recognition, and facial recognition and retinal scanning.
Face detection is used in addition to other biometrics in smartphones to identify users and grant access control.

Security. Face detection is used in security cameras to detect people who enter restricted spaces or to count how many people have entered an area. An additional use is drawing language inferences from visual cues -- a form of lip reading. This can help computers determine who is speaking and what they're saying, which helps with security applications. Furthermore, face detection can be used to determine which parts of an image to blur to ensure privacy, and it's used by public security cameras to map streets and the people on them in real time.

Marketing. The technology also has marketing applications, such as displaying specific advertisements when a particular face is recognized, or detecting emotions when customers react to products or services.

Emotional inference. Another application for face detection is as part of a software implementation of emotional inference, which can help people with autism understand the feelings of people around them. The program reads the emotions on a human face using advanced image processing.

Biometric identification. Similar to how face detection is used with smartphones, it can be used in e-commerce and online banking to verify identities based on facial features. It can also be used to control access to physical facilities.

Social media. Social media apps use face detection to determine the identities of people in photos and to suggest tagging them. This was one of the first mainstream uses of face detection.

Healthcare. Face detection can also be used in healthcare to facilitate patient check-ins and checkouts, maintain security, control access to restricted areas, and evaluate patients' emotional states.

Advantages of face detection

As a key element in facial imaging applications, such as facial recognition and face analysis, face detection creates various advantages for users, including the following:

  • Improved security. Face detection improves surveillance efforts and helps track down criminals and terrorists. Personal security is enhanced when users use their faces in place of passwords, because there's nothing for hackers to steal or change.
  • Easy integration. Face detection and facial recognition technology is straightforward to integrate, and most applications are compatible with the majority of cybersecurity software.
  • Automated identification. In the past, identification was manually performed by a person; this was inefficient and frequently inaccurate. Face detection allows the identification process to be automated, saving time and increasing accuracy.

Disadvantages of face detection

Face detection also has various disadvantages, including the following:

  • Massive data storage burden. The ML technology used in face detection requires a lot of data storage that might not be available to all users.
  • Inaccuracy. Face detection provides more accurate results than manual identification processes, but it can also be thrown off by changes in appearance, camera angles, expression, position, orientation, skin color, pixel values, glasses, facial hair, and differences in camera gain, lighting conditions and image resolution.
  • A potential breach of privacy. Face detection's ability to help the government track down criminals creates huge benefits. However, the same surveillance can let the government observe private citizens. Strict regulations must be set to ensure the technology is used fairly and in compliance with human privacy rights.
  • Discrimination. Experts have raised concerns that face detection is less accurate for people of color, particularly women of color, and that this shortfall could result in falsely connecting people of color with crimes they didn't commit. These worries are part of a broader concern about racial biases in machine learning algorithms.

Graphic featuring quotes from four Black professionals on racism in AI.
Racial bias in AI systems is a major concern among many professionals.

Face detection vs. face recognition

The terms face detection and face recognition are often used interchangeably, and they both pertain to face identification. However, facial recognition is actually an application of face detection -- albeit one of the most significant ones. Facial recognition software is used for unlocking phones and mobile apps as well as for biometric verification. The banking, retail and transportation industries use facial recognition to reduce crime and prevent violence.

In short, face recognition technology goes beyond detecting the presence of a human face to determine whose face it is. The process uses a computer application that captures a digital image of an individual's face -- sometimes taken from a video frame -- and compares it with images in a database of stored records.
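
A minimal sketch of that distinction, using the open source face_recognition library purely as one example: detection returns where faces are, while recognition encodes each detected face as a faceprint and compares it with a stored record. The image file names and the 0.6 distance cutoff are illustrative assumptions.

```python
# Sketch contrasting detection (where is a face?) with recognition
# (whose face is it?) using the open source face_recognition library.
# File names and the 0.6 distance cutoff are illustrative assumptions.
import face_recognition

# Detection: find face bounding boxes in an image.
image = face_recognition.load_image_file("group_photo.jpg")
face_locations = face_recognition.face_locations(image)
print(f"Detected {len(face_locations)} face(s)")

# Recognition: turn each detected face into a numeric encoding (faceprint)
# and compare it against an encoding from a stored record.
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

for encoding in face_recognition.face_encodings(image, face_locations):
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    if distance < 0.6:  # assumed match threshold
        print("Match: this face resembles the stored record")
    else:
        print("No match for this face")
```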

Popular face detection software

Among the face detection software programs available are the following:

  • Amazon Rekognition is a cloud-based service that identifies individuals in real-time video streams and pairs individual metadata with faces; a brief usage sketch follows this list.
  • Dlib is an ML toolkit used in security, surveillance and image analysis.
  • Google Cloud Vision API provides basic face detection and identification in photos and videos.
  • Megvii Face++ is often used for access control, e-commerce and social media.
  • Microsoft Face API is a cloud-based service that identifies and tracks individuals in pictures and video streams and can analyze facial features.
  • OpenCV is an open source computer vision library used in academic and commercial applications.
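
As one example from the list above, the sketch below calls Amazon Rekognition's DetectFaces operation through the boto3 SDK. It assumes AWS credentials are already configured and that a local "photo.jpg" exists; requesting the full attribute set is optional.

```python
# Sketch of face detection with Amazon Rekognition via the boto3 SDK.
# Assumes AWS credentials are configured and "photo.jpg" exists locally.
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # also return age range, emotions and so on
    )

# Each FaceDetail includes a bounding box plus the requested attributes.
for face in response["FaceDetails"]:
    box = face["BoundingBox"]
    print(f"Face at {box} with confidence {face['Confidence']:.1f}%")
```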

The future of face detection

The capabilities of face detection are quickly growing due to the use of deep learning and neural networks. These algorithmic approaches are driving face recognition systems to more accurate, real-time detections. They're also enabling pairings with other biometric authentications, such as fingerprints and voice recognition, for advanced security.

However, developers and companies have slowed down some advancements, such as the ability to detect emotion through facial features, because of concerns over ensuring the responsible and ethical use of AI. For instance, Microsoft removed emotional recognition abilities from services such as Azure.

Many experts cite ethical and privacy concerns in arguments against the further development of face detection and AI in general. Most significantly, face detection and facial recognition can be used without consent or a detected person's awareness. In addition, the risk of false positives is a problem.

Even supporters of AI, such as Elon Musk, have urged temporary halts to the development of AI systems, including face detection technology, citing ethical considerations and concern about unforeseen negative consequences.

History of face detection

The first computerized face detection experiments were launched in 1964 by American mathematician Woodrow W. Bledsoe. His team at Panoramic Research in Palo Alto, Calif., used a rudimentary scanner to scan people's faces and find matches in an attempt to program computers to recognize faces. The experiment was largely unsuccessful because of the computer's difficulty with pose, lighting and facial expressions.

Major improvements to face detection methodology came in 2001, when computer vision researchers Paul Viola and Michael Jones of Mitsubishi Electric Research Laboratories proposed a framework to detect faces in real time with high accuracy. The Viola-Jones framework is based on training a model to understand what is and is not a face. Once trained, the model extracts specific features, which are stored in a file so that features from new images can be compared with the stored features at various stages. If the image under study passes through each stage of the feature comparison, then a face has been detected and operations can proceed.

The Viola-Jones framework is still used to recognize faces in real-time applications, but it has limitations. For example, the framework might not detect a face that's covered with a mask or scarf, and if a face isn't properly oriented, the algorithm might not be able to find it. Recent years have brought advances in face detection using deep learning, which outperforms traditional computer vision methods.

Face detection is a technology at the cutting edge of AI.

This was last updated in April 2023
