Trial and error: time to rethink the governance of new technology in policing?

Blog post

You might have seen the video circulating of a Metropolitan Police officer fining someone for swearing at officers in Romford. By his own account, he had covered his face as he passed a police van equipped with facial recognition cameras and had then apparently been challenged by officers. It's a case that raises a lot of questions for me and, judging by discussions on social media, clearly for others too.

In terms of the individual incident, I wonder on what grounds the officers stopped the man in the first place. There is a general question about when legally avoiding a police intervention, as in this case or by walking around a knife arch in a public place, reaches the threshold for a conversation with a police officer, or even grounds for a stop and search. Grey areas would seem to abound, which feels ethically problematic.

More urgently, and my focus here, I wonder about the wider development and deployment of new (particularly automated) technologies in policing; in particular, how public consent is being assessed and ensured, and how local trials such as in London relate to what must be national-level legal and ethical considerations.

The use of facial recognition technology, currently being trialled in public by the Metropolitan Police and South Wales Police (and used in their custody suites by Leicestershire Police), is clearly contentious, and not just because the likes of Liberty and Big Brother Watch say so. The potential for automated surveillance already exists, and the public will quite reasonably have concerns about how the technology is being used now, and how it might be used in the future.

The importance of context

I suspect relatively few people would take issue with the use of facial recognition to find a named terrorism suspect in a crowded place, or a lost child, or someone with dementia who has gone missing, especially if all data relating to negative matches were immediately and permanently deleted.

But what if the technology was harnessed as part of the ‘hostile environment’ policy and used to identify immigration overstayers? Is there a threshold beyond which public consent is lost? How would the public feel about the technology being used to identify people with unpaid fines, or for automated intelligence gathering about young people’s patterns of association, or those joining political protests? Are there important distinctions between tactical deployments, as in the case of the police van in Romford, and the scope for facial recognition technology being mainstreamed, for example if incorporated into body worn video technology?

Public consent must necessarily be grounded in particular contexts, meaning that the question to be asked of the public is not ‘do you approve of the use of facial recognition by the police?’, but rather ‘in this particular instance, and with these conditions, do you approve of the use of facial recognition by the police?’

Public education

As a member of the public, I remain unclear about many of the critical details regarding the police use of facial recognition technology and have the sense that the technology seems to be running ahead of the legal and regulatory frameworks, and especially public education efforts.

In particular, I question how well the police service has communicated its intentions and framed the debate, or for that matter communicated the legal framework within which facial recognition operates, including the rights of members of the public not to be scanned and not to answer questions from officers seeking to engage them.

Similarly, it is unclear what efforts are being made to identify, acknowledge and address bias in the technology, such as differing error rates by skin tone or gender, which have the potential to skew policing interventions, drive disproportionality and, in turn, compromise legitimacy.

Others have also highlighted the need for communication. The NPCC lead for data analytics, West Midlands Detective Chief Superintendent Chris Todd, has discussed the need for the police service ‘to bring the public with us’, while the recent Police Foundation paper on ‘data driven policing’ recommends a ‘deliberative democratic’ approach, educating citizens about the complexities before seeking their considered views.

Local versus national governance

Related to communication, ethics and legitimacy, I have a particular concern that contentious technologies like facial recognition are being developed and trialled independently and entrepreneurially by different police forces, with neither an overarching national public information campaign nor an overarching national governance framework for the police service as a whole, including ethical oversight and assurance. Some structures do exist at police force level, as with the deliberations of the London Policing Ethics Panel, but given the issues at stake it surely cannot be right that this is being allowed to happen piecemeal.

Along with 'big data', AI and other technology-driven applications, facial recognition offers immense potential for good, but also raises serious civil liberties concerns that cannot simply be dismissed with 'if you've nothing to hide you've nothing to fear'. This is even more pressing given that the technology is unlikely to be deployed in all areas at once, at least in the years if not decades ahead; instead it will probably be focused on communities and locations that are already more heavily policed and surveilled. Here, I'm sure there are parallels with the rollout of automatic number plate recognition (ANPR) technology.

Too important to be left to policing?

My instinct is also that the issues these technologies raise are probably far too important to be left to policing. The 43 territorial forces and their other policing, political and law enforcement counterparts already have difficulty agreeing on common approaches to a range of less contentious subjects. Negotiating questions of ethics and public consent 43 (or more) times seems wholly unsatisfactory.

So what might work better?

As these are societal-level questions, I would be in favour of a single national programme under a common governance framework, with a dedicated national ethics board (perhaps building on the Independent Digital Ethics Panel for Policing). This could be developed under the auspices of and funded by the Home Office – although I do wonder if the remit should go beyond policing and security to the full range of ways facial recognition and related technologies might be deployed, in which case the Cabinet Office might be a more natural government sponsor. There are doubtless balances to be struck between scope and timeliness.

In any case, given that the facial recognition genie is already out of the bottle, a rethink of the current approach seems increasingly urgent.

Read the Police Foundation's recent (March 2019) report, Data-driven policing and public value, by Ian Kearns and Rick Muir.