Biometrics and the State

Biometrics are a way to measure a person’s physical characteristics to verify their identity. These can include physiological traits, such as fingerprints and eyes, or behavioural characteristics, such as the unique way you might hold your phone. To be useful, biometric data must be unique, permanent and collectible. Once measured, the information is compared and matched against records held in a database.
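
As a rough illustration of the mechanics, a verification system typically reduces a captured trait to a numeric template at enrolment, then compares later captures against that template using a similarity threshold. The sketch below is a hypothetical Python example only; the feature extraction is stubbed out, whereas real systems use dedicated algorithms such as minutiae matching or face embeddings, and the threshold value here is invented:

```python
import numpy as np

MATCH_THRESHOLD = 0.9  # hypothetical cut-off; real systems tune this carefully

def extract_template(sample: np.ndarray) -> np.ndarray:
    """Stand-in for a real feature extractor (e.g. fingerprint minutiae
    or face embeddings). Here we simply normalise to unit length."""
    return sample / np.linalg.norm(sample)

def verify(enrolled: np.ndarray, probe: np.ndarray) -> bool:
    """Compare a fresh capture against the stored template using
    cosine similarity; accept only above the threshold."""
    similarity = float(np.dot(extract_template(enrolled),
                              extract_template(probe)))
    return similarity >= MATCH_THRESHOLD

# Enrolment stores a template; later captures are matched against it.
enrolled_sample = np.random.default_rng(0).normal(size=128)
probe_sample = enrolled_sample + np.random.default_rng(1).normal(scale=0.05, size=128)
print(verify(enrolled_sample, probe_sample))  # True when captures are close enough
```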

A few months ago, I was confronted with an issue about the use of my own biometric data. I had enrolled on an IT course with an accompanying exam. What I was not aware of until halfway through the course was that I would be required to provide biometric information as part of the registration process. This had been introduced as an anti-fraud measure.[i] I fully understood and supported the move to clamp down on exam fraud, which weakens the reputation of the exam. I did, however, have some concern about how the data would be stored and who would have access to it, which ultimately led me back to thinking about how biometric data is used as it becomes a more commonplace authentication method.

Given the sensitivity around the concept and use of biometric data, particularly by authorities – it is the very definition of who you are – it is no surprise that its use in law enforcement, especially in the field of facial recognition, is coming under greater scrutiny. Many of us have no issue with using a fingerprint to unlock a smartphone, yet object to the state being able to identify us automatically by our faces.

For example, in an incident in central London in early 2019, witnesses said several people were stopped after covering their faces or pulling up hoods to evade detection by facial recognition cameras. Eight of these people were ultimately arrested over an eight-hour period. For some, what was potentially an exercise of the right to privacy – keeping their whereabouts from the state – was construed by law enforcement as suspicious.[ii]

A legal challenge, launched against the Metropolitan Police over its use of facial recognition in June 2018, argued that automatic facial recognition violates articles eight, ten and eleven of the European Convention on Human Rights – guaranteeing the rights to private life, freedom of expression, and freedom of assembly and association – and is neither proportionate nor necessary. Baroness Jenny Jones, who launched the action with the group Big Brother Watch, said: “This new form of surveillance lacks a legal basis, tramples over civil liberties, and it hasn’t been properly debated in parliament… The idea that citizens should all become walking ID cards is really the antithesis to democratic freedom.”[iii]

It is also recognised that facial recognition systems show higher error rates for women with darker skin. Artificial intelligence technologies are only as good as the data used to train them: if a facial recognition system is to perform well across all people, the training dataset needs to represent a diversity of skin tones, as well as factors such as hairstyle, jewellery and eyewear.[iv] As a result, the introduction of such technology could disproportionately impact particular demographic groups – a potential minefield when it comes to law enforcement.
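
One way to surface this kind of bias is to report error rates per demographic group rather than a single headline accuracy figure. The following is a minimal, hypothetical sketch; the group names, labels and predictions are invented purely for illustration:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, actual_match, predicted_match) tuples.
    Returns false positive and false negative rates per demographic group."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, actual, predicted in records:
        c = counts[group]
        if actual:
            c["pos"] += 1
            if not predicted:
                c["fn"] += 1  # genuine match missed
        else:
            c["neg"] += 1
            if predicted:
                c["fp"] += 1  # non-match wrongly accepted
    return {
        g: {
            "false_positive_rate": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "false_negative_rate": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Illustrative data only: a system that errs more often for one group.
sample = [("group_a", True, True), ("group_a", False, False),
          ("group_b", True, False), ("group_b", False, True)]
print(error_rates_by_group(sample))
```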

Is there an example of where unrestricted use of this technology could lead? Look to China, where the Sharp Eyes project continues to gather momentum. Sharp Eyes is a surveillance system powered by facial recognition and artificial intelligence.[v] According to an official paper released in 2015, the Chinese government’s stated aim is that by 2020 a national video surveillance network will be operational with “global coverage, network-wide sharing, full-time availability, and full control”. The name of the government’s 2020 project – xueliang, or “sharp eyes” – is a throwback to the Communist party slogan “The people have sharp eyes”, referencing the totalitarian ploy of encouraging neighbours to spy on one another.

What is to stop the state acting in a similar vein in the UK? Regulation, legislation and transparency over the actions of government are part of the mechanism we use to keep the state accountable. At the time of writing, the use of biometrics by the police in the UK is overseen by the Biometrics Commissioner, whose role is to keep under review the retention and use by the police of DNA samples, DNA profiles and fingerprints. When the Home Office released its Biometrics Strategy in June 2018, he published a response expressing concern that the Home Office had not been forthcoming about its future plans for the use of biometrics and the sharing of biometric data. The Surveillance Camera Commissioner expressed similar concerns in his Annual Report (2017/18).

This suggests that the policy makers (and by extension the government) of this country may not be ready for the ethical and legal challenges that this technology may throw up. Technology used in this way should not be treated as a “black box”; it should be transparent and held to account. Any biometric technology deployed in this way should be tested against a common standard so that it can be compared with similar systems. Technical specifications should be published, and false positive, false negative and crossover error rates should be publicly available. That way, if an individual is linked to an event (e.g. a crime), innocent people have a means to challenge the accusation, regardless of whether it was made by a human or a machine.
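
To make those metrics concrete: the crossover error rate is the point at which the false positive and false negative rates are equal as the match threshold is varied. Below is a rough sketch of how it could be estimated, assuming hypothetical score distributions for genuine and impostor comparisons:

```python
import numpy as np

def crossover_error_rate(genuine_scores, impostor_scores):
    """Sweep thresholds over all observed match scores and return the
    threshold where false positive and false negative rates are closest,
    along with the rates at that point."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    best = None
    for t in np.unique(np.concatenate([genuine, impostor])):
        fpr = float(np.mean(impostor >= t))  # impostors wrongly accepted
        fnr = float(np.mean(genuine < t))    # genuine users wrongly rejected
        if best is None or abs(fpr - fnr) < abs(best[1] - best[2]):
            best = (float(t), fpr, fnr)
    return best

# Illustrative distributions: genuine matches score higher on average.
rng = np.random.default_rng(42)
genuine = rng.normal(0.8, 0.1, 1000)
impostor = rng.normal(0.4, 0.1, 1000)
threshold, fpr, fnr = crossover_error_rate(genuine, impostor)
print(f"threshold={threshold:.2f}, FPR={fpr:.3f}, FNR={fnr:.3f}")
```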

There is also, of course, the question of data retention for people not charged with any crime. We should consider whether biometric data should be deleted automatically for innocent people, rather than placing the burden on them to contact the relevant authorities and request its removal. This also leads on to the nature of intelligence: what need does a relatively liberal country have to keep detailed information on citizens who have not been charged with a crime, much less convicted? There are many different areas to consider, but with the current emphasis on restraining police budgets, intelligence and technology are seen as preferable to paying police officers to perform routine activities.
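
In system terms, automatic deletion is not hard to express: a scheduled job that removes biometric records for people whose cases closed without charge once a retention window expires. A minimal sketch, assuming a hypothetical record store and an invented 90-day window (the real figure would be a policy decision):

```python
from dataclasses import dataclass
from datetime import date, timedelta

RETENTION_DAYS = 90  # hypothetical window; the real value is a policy decision

@dataclass
class BiometricRecord:
    person_id: str
    captured_on: date
    charged: bool

def purge_uncharged(records, today=None):
    """Keep records for charged individuals; drop records for uncharged
    individuals once the retention window has lapsed."""
    today = today or date.today()
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [r for r in records if r.charged or r.captured_on > cutoff]

store = [BiometricRecord("a", date(2019, 1, 1), charged=False),
         BiometricRecord("b", date(2019, 1, 1), charged=True)]
store = purge_uncharged(store, today=date(2019, 6, 1))  # only "b" survives
```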

As the pace of technology advances, its application will continue to throw up issues that may be divisive and have far-reaching sociological impact. This is evident in cybersecurity, where safety and liberty are uncomfortable bedfellows. I believe that as practitioners, we should not just be looking at how technology can be introduced to solve a problem; we must also have an eye to the ethics that surround that problem.

References

GOV.UK: Surveillance Camera Commissioner’s Speech to the annual data privacy conference

GOV.UK: Surveillance Camera Code of Practice

Information Commissioner’s Office: Police, justice and surveillance

[i] Biometric testing in exam industry works to stamp out fraud

[ii] Police stop people for covering their faces from facial recognition camera

[iii] BBC News: Police facial recognition system faces legal challenge

[iv] Microsoft improves facial recognition technology to perform well across all skin tones, genders

[v] LA Times: China’s new surveillance program aims to cut crime. Some fear it’ll do much more
