Despite widespread privacy concerns, the use of facial recognition technology is spreading rapidly. In this article, we’ll take a look at several of the technology’s most common uses alongside its many risks.
We also conducted an extensive survey to gauge consumer sentiment about facial recognition technology to help businesses make informed decisions about its use.
Facial recognition, a subset of biometric technology, is any system capable of identifying the geometry of a subject’s unique facial features (such as the distance between eyes and mouth width). Facial recognition systems typically use deep-learning algorithms that become more accurate as they are continually exposed to different images of the same faces over time.
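To make the matching step concrete, here is a minimal, illustrative sketch (not any vendor's actual pipeline). It assumes a deep-learning model has already converted each face image into a numeric embedding vector; recognition then reduces to finding the enrolled embedding closest to a new scan and checking whether the distance falls within a match threshold. All names, values, and the threshold below are hypothetical.

```python
import numpy as np

# Hypothetical enrolled gallery: identity -> face embedding produced by a
# deep-learning model (dimensions and values here are illustrative only).
gallery = {
    "alice": np.array([0.12, 0.85, 0.33, 0.47]),
    "bob":   np.array([0.91, 0.05, 0.62, 0.10]),
}

def identify(probe_embedding, gallery, threshold=0.6):
    """Return the closest enrolled identity, or None if nothing is close enough."""
    best_name, best_distance = None, float("inf")
    for name, reference in gallery.items():
        distance = np.linalg.norm(probe_embedding - reference)  # Euclidean distance
        if distance < best_distance:
            best_name, best_distance = name, distance
    # A face counts as "recognized" only if it falls within the match threshold.
    return best_name if best_distance <= threshold else None

# A new scan whose embedding happens to sit close to Alice's stored one.
probe = np.array([0.11, 0.83, 0.35, 0.45])
print(identify(probe, gallery))  # -> alice
```

In this simplified view, the "tagged copy" of your face is the stored embedding: without it, the system has nothing to compare a new scan against.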
For facial recognition tech to work, the system must have a tagged copy of your face for comparison. However, our research finds that less than a third of U.S. consumers are comfortable having their face scanned by private companies (not including healthcare providers).
But, even if you’ve never provided a photo to a particular company, your face may have found its way into various databases that scrape photographs from the internet to train algorithms. Take Flickr as an example—once one of the most popular photo websites on the internet, it was later sold to Yahoo and is now owned by SmugMug. In recent years, photos from Flickr have been compiled into the massive MegaFace database, which is used to train countless facial recognition algorithms.
Most facial recognition applications fall into one of three categories: access control, security surveillance, or identity verification. Here are a few of facial recognition technology’s most common applications, some of which we’ll examine in further detail later in the article.
In 2020 it’s going to become difficult to take an international flight without being exposed to facial recognition at passport checkpoints. The technology is already in use at two dozen major airports, and more are added to the list every month. And it’s not only being used for passport validation; many airlines are leveraging facial recognition for check-in and boarding.
This type of identity verification is already spreading well beyond your nearest airport. In Japan the technology is being used in the Osaka subway system, and Germany has facial recognition plans for 134 train stations.
Facial recognition is increasingly being used to manage access to large events. At the 2020 Consumer Electronics Show (CES) in Las Vegas, attendees were able to register their face in advance to gain entry. Facial recognition will also feature prominently in the logistics of the 2020 Summer Olympics in Tokyo, where it will be required for all athletes and accredited attendees. Organizers claim the technology will bolster security while shortening wait times for venue ID checks.
But it’s not only mega events such as CES or the Olympics that are using this technology. Visitor management software options such as Visitly and GreetLog are available to small businesses seeking to streamline access to events or facilities.
Surveillance is one of the primary uses for facial recognition; many of the technology’s top vendors focus specifically on this space. Police forces may use the technology to identify a person of interest or locate missing persons, or to monitor high-traffic areas for general surveillance purposes.
Private companies are also getting in on the action, with hundreds of major retailers deploying the technology to reduce in-store theft. Vendors such as FaceFirst and Cognitec are vigorously marketing security solutions aimed directly at retailers.
The retail industry has already discovered several novel uses for facial recognition technology. At some KFCs in China, you can register your face to pay with your smile. Want to buy alcohol at select grocery stores in the U.K.? Facial recognition will verify that you’re of age. Soon, all manner of facial recognition-enabled kiosks will allow you to pay for purchases, print tickets, check loyalty points, and more.
At some Walgreens locations, facial recognition-enabled video screens built into beverage coolers rotate through advertisements based on who is standing in front of the coolers. If the algorithm determines you’re an 18 to 25-year-old man, it might display an ad for an energy drink. Similar technology is being used in Japanese taxis; backseat passengers receive targeted video content based on their faces.
Facial recognition is already being used widely to track attendance, from schools to churches and workplaces. Office solutions replace manual methods of attendance tracking, and school systems can record attendance online or in the classroom. Vendors of software such as ClockInEasy and AttendLab cite more accurate attendance tracking and increased productivity as benefits.
Facial recognition emotion detection (also known as emotion analysis or emotion recognition) is being deployed in situations like monitoring hospital patients for pain, ensuring students are paying attention in class, and determining whether someone gets a job. The emotion detection and recognition market is growing quickly, with some estimates projecting 40% year-over-year growth to reach $92 billion by 2024.
Biometric technologies such as facial recognition are supplementing passwords—or replacing them altogether. This application has been mainstreamed in the U.S. by products such as Apple’s FaceID and is being rapidly adopted by banks and financial institutions for secure access to apps and to improve customer experience.
We surveyed U.S. consumers regarding their comfort levels with various applications of facial recognition technology. When we asked about facial recognition for passport control, 75% of respondents voiced some level of comfort. When asked about its use in police surveillance, the number dropped to 46%.
We also asked about building access, attendance tracking, retail purchases, emotion analysis, and personalized advertising. Here's what we heard.
Our data suggests that consumers have one level of comfort with facial recognition when it's used for practical security applications (passport control, building access, attendance tracking), and another when it comes to more subjective, non-security uses (retail purchasing, emotion analysis, personalized advertising).
We also asked consumers what concerns they have—if any—about the use of biometric technologies such as facial recognition. Of all the response choices provided, the misuse of biometric data drew the most concern, cited by 81% of respondents. It’s not a stretch, then, to say that concern about data misuse significantly shapes comfort levels with various facial recognition applications.
Businesses considering the use of facial recognition for sentiment analysis or targeted advertising must recognize consumer reluctance to trust these applications. At this stage, early adoption could repel more customers than it attracts.
For retail applications related to purchasing or customer tracking to be successful, customers should have clarity about how their data will be used, as well as the option to not participate. To illustrate what not to do, Chaayos, a popular Indian chai cafe, recently caused controversy over its loyalty program that used facial recognition whether customers agreed to it or not. Customers complained about the apparent deception and decried the lack of an option to opt out.
Facial recognition has endless applications, but it also comes with a lot of risk. Here are a few reasons many people are reluctant to trust it.
Two primary concerns with facial recognition technology are false positives and false negatives. A false positive occurs when the algorithm identifies the wrong person; a false negative occurs when the algorithm fails to identify the right person. These errors often stem from algorithmic bias, which can manifest in ways that are unfair to one group of people relative to another.
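The sketch below uses purely hypothetical numbers (not drawn from any real system) to show how both errors can arise from a single distance threshold: a look-alike’s scan falls inside the threshold and is wrongly matched (a false positive), while a poor-quality scan of the right person falls outside it and is wrongly rejected (a false negative).

```python
import numpy as np

# Illustrative embeddings only: person B happens to look like person A,
# and person A's second scan was captured in poor lighting.
person_a_enrolled = np.array([0.20, 0.80, 0.40])
person_b_probe    = np.array([0.25, 0.75, 0.45])  # different person, similar face
person_a_probe    = np.array([0.60, 0.55, 0.70])  # same person, noisy capture

THRESHOLD = 0.50  # hypothetical match threshold

def matches(probe, enrolled, threshold=THRESHOLD):
    """True if the probe embedding is within the match threshold of the enrolled one."""
    return np.linalg.norm(probe - enrolled) <= threshold

# The look-alike is wrongly accepted as person A: a false positive.
print(matches(person_b_probe, person_a_enrolled))  # True

# Person A's own noisy scan is wrongly rejected: a false negative.
print(matches(person_a_probe, person_a_enrolled))  # False
```

Tightening the threshold reduces false positives but produces more false negatives, and vice versa; when error rates differ across demographic groups, that trade-off falls unevenly on them.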
When the National Institute of Standards and Technology (NIST) examined the facial recognition algorithms of 99 different software developers, it found that minority groups had disproportionate rates of false positives and false negatives. Similarly, an MIT Media Lab study found that facial recognition algorithms were far less accurate when attempting to identify Black women than when identifying white men.
Do you ever have a smile on your face when you’re not feeling particularly happy? Maybe your face tends to have a default look of displeasure, even when you’re perfectly content. No matter: algorithms aren’t only reading your face; they’re now trying to read your mind.
Vendors claim emotion detection technology can read microexpressions to discern a subject’s emotions at any given moment, and they are marketing its use for everything from grading job interviews to reading children’s emotions in the classroom—a task that has proved difficult in practice.
But despite the high stakes of emotion detection, New York University’s AI Now Institute recently released its 2019 report stating there is “little to no evidence that these new affect-recognition products have any scientific validity.” Additionally, without cultural, personal, or situational context, emotion detection may be fundamentally flawed, as other studies have pointed out.
Furthermore, the Hawthorne Effect—whereby a subject’s behavior is altered merely by being observed—could impact anyone knowingly being subjected to emotion detection. For example, there are now hundreds of user videos online featuring tips on how to ace HireVue’s video interview system, which evaluates applicants using what the AI software company describes as facial analysis.
Controversy surrounding emotion detection has prompted a new Illinois law regulating the use of AI in the hiring process. Similar bills have been proposed around the country.
The use of facial recognition to identify terrorists or locate missing children is among the technology’s most practical and valuable applications. But, as these systems are deployed by governments around the world, how can we be sure that they won’t also be used unethically?
In India, police have used facial recognition to film protests and screen for “rabble-rousers and miscreants.” In China, the technology has been used to monitor the activities of its minority Uighur community. During recent protests in Hong Kong, facial recognition cameras were confronted (and in some cases destroyed) by masked protestors attempting to express their views without being identified.
London’s Metropolitan Police announced it will soon begin using live facial recognition cameras to surveil city streets. U.S. police departments have begun using live facial recognition powered by the vast Clearview database, which holds more than 3 billion photos scraped from the internet.
With little or no oversight of these practices, the public must simply trust that these tools will be used responsibly.
Unlike a password, you can’t simply change your fingerprint or create a new face. Once your biometric data is compromised, it’s forever. And biometric data breaches happen just like any other kind. Last year, biometric security services provider Suprema left more than 27 million records in an unprotected and unencrypted database. Fingerprints and facial recognition data were among a variety of highly sensitive information left exposed.
Earlier this year, a Chinese facial recognition company left 1.3 million biometric records of students from 23 different schools in a database open to the internet without security. This follows a 2018 breach that exposed another Chinese company’s insecure facial recognition database housing the records of 2.5 million people. As biometric technologies are increasingly adopted, these types of breaches will likely escalate in number and severity.
Your face is your most identifiable feature. But what happens when your face is all anyone needs to find out anything and everything about you?
It seems we’re all about to find out.
Take Facebook as an example. At one point, the company developed an app that allowed users to find a person’s Facebook profile simply by pointing a phone at their face. Facebook says the app was only tested on employees and has since been discontinued. Similarly, a Russian photographer proved how easily strangers can be identified when he took photos of random people on the subway and used a facial recognition app to find their social media profiles.
Google has developed facial recognition capabilities that it says are ready to deploy in its image search, but the company has thus far refrained from activating the technology. In fact, Google CEO Sundar Pichai recently took to the Financial Times to voice his concerns about facial recognition and call for the regulation of AI. Meanwhile, Russian search giant Yandex has enabled facial recognition for its image search; users need only upload an image of a face and click search.
Privacy advocates and numerous legislators view facial recognition as a form of mass surveillance that has significant potential for abuse. Several major cities have already banned governmental use of facial recognition, including:
San Francisco, CA
Oakland, CA
Berkeley, CA
Cambridge, MA
Somerville, MA
Brookline, MA
Soon, both Portland, Oregon, and Portland, Maine, will vote on similar bans. If enacted, the Portland, Oregon, ban will be the first to also prohibit the use of facial recognition by private companies. Facial recognition has also been banned in other scenarios, from police body cameras to major music festivals.
The U.S. Congress is also looking into facial recognition. The House Oversight and Reform Committee is working on legislation to regulate facial recognition technology and has already held three hearings on the matter. At the same time, several federal bills have been introduced to limit the technology in various ways, and Senator Bernie Sanders has called for a moratorium on its use by police nationwide.
Currently, only Texas, New York, Washington, and Illinois have state-wide biometric privacy laws. Enacted in 2008, Illinois’ Biometric Information Privacy Act (BIPA) is the only biometric privacy law that allows for a private right of action (i.e., individuals can file suit).
That’s why BIPA is the basis for a multibillion-dollar class-action lawsuit against Facebook related to its face-labeling “tag suggestions” feature. Litigation under BIPA is also pending against home improvement retailers Lowe’s and Home Depot, each of which is accused of surreptitiously scanning shoppers for security purposes.
Meanwhile, the European Union is weighing a five-year ban on facial recognition to give regulators time to determine how to properly govern the technology. This follows Sweden issuing the first GDPR fine related to facial recognition, in a case involving students whose attendance was tracked at school without their consent.
Our research shows that the regulation of facial recognition and related biometric technologies strongly resonates with the public. Our survey of U.S. consumers finds that:
95% believe they should have the right to opt out of facial recognition systems used by private companies.
97% support the right to know if a private company is in possession of their biometric data.
95% think that private companies should obtain express consent before sharing biometric data with other businesses.
84% believe the use of biometric data by private companies should be regulated by federal law.
Any company considering the adoption of biometric technology—and the mountains of sensitive and perhaps soon-to-be-regulated data it creates—must take these results into account.
There’s little doubt that facial recognition will play a significant role in the future of business and society. But for the technology to succeed at scale, consumers must trust that it is accurate, secure, and not needlessly invasive. Improved algorithms, responsible deployment, and practical regulation will help us realize the utility of facial recognition technology while also protecting our privacy.
The statistics referenced in this article come from a survey conducted by GetApp in January 2020 among 487 U.S. consumers.
Note: The information contained in this article has been obtained from sources believed to be reliable. This document, while intended to explain facial recognition technology, is in no way intended to provide legal advice or endorse a specific course of action.
Zach Capers