It wasn’t that long ago that experts saw facial recognition technology as the “it” thing in the biometric security landscape. The premise was simple and sensible enough: with the right algorithm, the unique composition of any person’s face can function as a digital key and security token. It’s an incredible modern convenience, but how do we make sure that these measures work as intended, and that they remain in the possession of their rightful owners?
The growing trend of utilising automated facial recognition (AFR) in a variety of sectors has raised numerous concerns. What began as a novel smartphone feature soon found applications in more sensitive areas such as online banking and surveillance. Law enforcement also began to incorporate AFR into body cam technology for their officers. Naturally, this has led to questions about the accuracy and legitimacy of AFR.
Facial Nuances Are Tricky
For biometric security to be viable, it must be accurate. Unfortunately, facial recognition software is not immune to false positives. In fact, it can perform rather poorly: earlier in February, the British Metropolitan Police deployed facial recognition tech on 8,600 pedestrians in London (without consent, by the way, in a clear infringement of privacy). The system, which interfaced with the Met Police database, generated eight alerts. Here’s the kicker: only one of them was an accurate identification that led to an arrest. In other words, seven of the eight alerts were wrong, an error rate of a whopping 87.5 per cent.
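The arithmetic behind that figure is worth spelling out: the 87.5 per cent error rate refers to the share of alerts that were wrong, not the share of all pedestrians scanned. A minimal sketch using the trial’s reported numbers:

```python
# Illustrative arithmetic for the Met Police trial figures cited above
# (8,600 faces scanned, 8 alerts generated, 1 accurate identification).
scanned = 8600
alerts = 8
true_matches = 1

false_alerts = alerts - true_matches
false_alert_rate = false_alerts / alerts        # share of alerts that were wrong
alert_rate = alerts / scanned                   # share of scans that triggered an alert

print(f"False alerts: {false_alerts} of {alerts}")
print(f"Error rate among alerts: {false_alert_rate:.1%}")
print(f"Overall alert rate: {alert_rate:.3%}")
```

Note that the same trial could also be described as having a tiny error rate relative to the 8,600 scans; which denominator you choose dramatically changes how the system’s performance reads.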
AI experts have also highlighted the limitations of AFR systems when deployed across racial and gender lines. Dr Fanglin Wang, head of the artificial intelligence unit at Advance.AI, stressed the importance of creating localised data sets when we spoke to him about this. “If facial recognition technology is to work in Singapore or Southeast Asia, for instance, you need to train the underlying algorithm with the relevant local data,” he said. “That means exposing it to Singaporean or Southeast Asian faces, as their facial structures and skin tones are quite different from those of Caucasians.”
False positives and negatives are often the result of low-quality images that are either blurred or poorly lit. Dr Wang pointed to advancements in technology, such as better sensors and depth control on cameras, as the solution to this problem. He also believes that AFR accuracy has the potential to surpass that of the human eye given “the right data”.
Trusting The Keymakers
And what about those who manage the software? Every security system comes with a backend. In the case of AFR technology, building and maintaining a database of images is crucial to operations. Therein lies the question: how will such a database be compiled? Policies on personal data such as Singapore’s Personal Data Protection Act impose strict guidelines on how personal information such as facial images can be gathered and used. As a result, AFR developers sometimes come under fire for using unsavoury methods to collect images.
On 10 March 2020, Vermont attorney general Thomas J Donovan filed a lawsuit against data broker Clearview AI over alleged privacy violations. The New York-based facial recognition app company is currently under investigation for creating a searchable database containing billions of facial images harvested from social media platforms. The database also allowed users to upload specific facial images and view matches. Other plaintiffs in Illinois and New York have since taken legal action against the company.
Clearview AI’s initial defence was that it only gave access to law enforcement agencies. Investigations, however, have revealed that users included corporations, wealthy individuals and investors; American billionaire John Catsimatidis has openly admitted to using the Clearview AI app to spy on his daughter’s date in an article published by The New York Times.
In response to the scandal, sites such as YouTube, Facebook, Twitter and LinkedIn have issued cease-and-desist letters to Clearview AI demanding that it halt its data mining. Apple has also suspended the company from its developer programme.
Trading Privacy For Convenience
Two other major concerns over privacy involve the risk of security breaches and cross-platform integration. A data breach can easily lead to identity fraud by granting unauthorised access to sensitive information such as banking details. There is no such thing as a hack-proof system, especially when one considers the tantalising nature of the payoff.
Cross-platform integration, while often seen as a boon for most software, can be damaging when it concerns facial recognition. Integrating with digital services may seem innocuous, but what happens when the same occurs with public surveillance systems or CCTVs? While the potential risks are clear, proving that any given harm stems from such integration will be close to impossible, which means policy intervention may not be timely or sufficient.
Singapore’s Adoption Of AFR
These concerns have become quite pertinent in the local context, given that Singapore plans to launch an islandwide facial recognition service by 2022.
“Any potential vendor is subject to scrutiny over data storage and security,” Dr Wang shared. “Before deploying any technology, extensive testing around performance standards takes place to understand the limitations of the technology.” Furthermore, private organisations will not be privy to Singaporeans’ biometric data, as outlined in the National Digital Identity programme.
The Singapore government also proposed updates to its AI Governance Framework at Davos 2020 in January in the interest of accountability, transparency and fairness. According to Dr Wang, this means that “human involvement must be paired with AI technology to ensure accountable decision-making”. He maintained that the public’s right to privacy and the appropriate usage of their data must always be top priority.
Navigating the issues surrounding AFR technology will be no walk in the park. The rhetoric is currently split evenly between critics and supporters of the technology. For now, what it boils down to is individual risk and the progression of technology. But as AFR shifts its focus away from the consumer and towards platforms, will society be able to maintain its appetite for sacrificing personal data on the altar of convenience?