Late last year, while researching material for a TEDx talk I was giving on biometric data, I came across an example that chilled me. In summer 2015, billboards in Hong Kong were plastered with life-sized portraits of litterers, their faces reconstructed from DNA traces recovered from discarded rubbish as part of an anti-littering campaign.
Financial Times: The biometric threat of advertisers using your data
[Video: Ogilvy & Mather HK, 'The Face of Litter', via Vimeo]
In the UK, retailers, hospitals, airports, museums and casinos are using facial recognition for security and access, while banks such as HSBC have unveiled voice recognition to replace traditional passwords.
But despite biometrics being hailed as a smarter, more secure alternative to passwords, the risks of misuse and hacking are enormous. This type of data is near-impossible to change because it is encoded in your biology.
Once collected, it points to you and you alone; once lost or stolen, it is open to permanent abuse. You can always rethink a password but you can't rewrite your DNA.
Security specialists have long pointed to the fallibility of biometric systems, showing that they can be fooled, and have warned of the risks of biometric data hacks and leaks. Last week, those risks became all too real.
Facial recognition data and more than a million fingerprints were discovered in a publicly accessible database belonging to Suprema, a biometric security company whose technology is used by banks, governments and the UK's Metropolitan Police.
Suprema provides its biometric platform to an access control business called Nedap, which serves 5,700 organisations in 83 countries, according to a report in The Guardian. "Once stolen, fingerprint and facial recognition information cannot be retrieved. An individual will potentially be affected for the rest of their lives," said vpnMentor, the security research firm that found the flaw in the database.
The risks of further large-scale biometric leaks are steadily increasing as we flood companies, large and small, with our biometric data. And just as our online browsing behaviour has become the primary currency of the internet, biometric data is increasingly being monetised. Facebook uses facial recognition on our photos to identify people in the background of images, whether they are Facebook users or not.
Smart speakers such as Amazon Echo (aka Alexa) and Google Home are moving towards individual voice recognition. One online dating start-up, Pheramor, matches up potential couples using their DNA. In July, the FT reported that Pampers and Verily, Google's life sciences business, were designing smart nappies to collect data from infants while they sleep, wee and poo.
The harvesting of biometric data from sometimes vulnerable populations has raised concerns about the potential for mass surveillance. Privacy activists have criticised the UN's Refugee Agency for fingerprinting refugees who enter the Democratic Republic of Congo — a practice that, they say, increases the risk of surveillance, discrimination and exploitation.
Part of the solution is ensuring that companies have stringent cyber security procedures, such as fingerprint and face hashing, where the data is encoded in a way that can't readily be reversed.
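To illustrate the one-way-encoding idea in the simplest possible terms, here is a minimal Python sketch using a salted SHA-256 hash: the server stores only the salt and the hash, never the raw template, so a database leak does not directly expose the biometric itself. The function names and the byte-string "template" are illustrative assumptions; real biometric systems cannot rely on exact-match hashing like this, because two captures of the same finger or face are never bit-identical, and instead use fuzzy extractors or cancelable-template schemes built on the same principle.

```python
import hashlib
import secrets

def hash_template(template: bytes, salt: bytes) -> str:
    """One-way encode a biometric template with a per-user salt."""
    return hashlib.sha256(salt + template).hexdigest()

def verify(candidate: bytes, salt: bytes, stored_hash: str) -> bool:
    """Check a fresh capture against the stored hash (exact match only)."""
    return secrets.compare_digest(hash_template(candidate, salt), stored_hash)

# Enrolment: store only the salt and the hash, never the raw template.
salt = secrets.token_bytes(16)
enrolled = hash_template(b"example-fingerprint-minutiae", salt)

# Verification: the raw biometric never needs to be kept on the server.
assert verify(b"example-fingerprint-minutiae", salt, enrolled)
assert not verify(b"different-template", salt, enrolled)
```

The per-user salt also means the same fingerprint enrolled with two services produces two unrelated hashes, so one breached database cannot be matched against another.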
That may entail legislation mandating that biometric data be collected and stored in ways that protect people's privacy. But rather than relying on companies to safeguard our data, or on governments to regulate its use, it is incumbent on each of us to question the steady trickle of our biological information into the hands of profit-led corporations.
Written by: Madhumita Murgia
© Financial Times