Seven police forces in England and Wales currently use live facial recognition on a regular basis, while others have deployed it for large events or as trials. This dangerously authoritarian surveillance is a threat to our privacy and freedoms – it has no place here in Britain. Below are expert responses to common questions about police use of the technology.
Live facial recognition (LFR) matches faces on live surveillance camera footage against a police watchlist in real time. Before a deployment, the police will prepare a watchlist, which can comprise police-originated images, such as custody images from the Police National Database, and non-police-originated images, for example publicly available, open-source images or information shared by other public bodies.
At the deployment, a camera will capture a live video feed, from which the LFR software will detect human faces, extract the facial features and convert them into a biometric template to be compared against those held on the watchlist.
The software generates a numerical similarity score indicating how closely a captured facial image matches each face on the watchlist. Police set a threshold for these scores, and any match above the threshold is flagged to officers, who may then decide to stop the individual.
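To make the matching step concrete, the sketch below illustrates, in highly simplified form, how a similarity-score comparison against a watchlist might work. It is purely illustrative: the embedding representation, the cosine-similarity measure, the function names and the 0.60 default threshold (borrowed from the figure discussed later in this briefing) are assumptions made for the example, not a description of any force's actual software.

```python
# Illustrative sketch only: real LFR deployments use proprietary algorithms.
# The templates, similarity measure and threshold here are hypothetical.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two biometric templates (face embeddings)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def check_against_watchlist(face_template: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.60) -> list[tuple[str, float]]:
    """Compare one passer-by's template against every watchlist entry.

    Any entry scoring at or above the threshold is returned as an alert,
    which in a live deployment would be flagged to officers on the ground.
    """
    alerts = [(person_id, cosine_similarity(face_template, stored))
              for person_id, stored in watchlist.items()]
    return sorted([a for a in alerts if a[1] >= threshold],
                  key=lambda a: a[1], reverse=True)
```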
It is important to draw a distinction between live facial recognition and retrospective facial recognition. Live facial recognition uses a real-time video feed to biometrically scan the faces of members of the public almost instantaneously.
Retrospective facial recognition (“RFR”) uses facial recognition software on either still images or video recordings taken in the past and compares the detected faces to photographs of known individuals held on photo databases. Whilst RFR may have limited use as a forensic investigatory tool, subject to strict regulation, live facial recognition amounts to constant generalised surveillance. The British public would not be comfortable with fingerprint scanners on our high streets; there is no reason we should accept being subjected to a biometric police line-up either.
LFR presents a radical departure from long-held principles embedded in UK policing practices that serve to protect our rights and freedoms. It is a fundamental principle that individuals should not have to identify themselves to the police, unless they are suspected of criminal behaviour.
LFR reverses this protection by indiscriminately subjecting innocent members of the public to mass identity checks. During a deployment, everyone who passes by a LFR camera is treated like a potential suspect who requires vetting by law enforcement, which undermines the presumption of innocence. It is unimaginable that the police would randomly stop members of the public going about their daily lives to check their fingerprints against databases for a potential match; there is no reason that LFR should perform the same function with intrusive automated biometric software.
The deployment of LFR technology threatens privacy both on an individual level and as a societal norm. The AI-powered mass surveillance software takes biometric scans of each individual who passes by the camera and retains photos of those flagged by the system – even where no further action is taken by the police. The significance of a technology which subjects us all to constant surveillance – one that not only records our whereabouts and activities but can also identify us in real time as we go about our daily lives – cannot be overstated. In recent years, parliamentarians across parties in Westminster, members of the Senedd, rights and equalities groups and technology experts across the globe have called for a stop to the use of this technology.
There is a particularly chilling, and entirely foreseeable, risk that the UK’s vast CCTV camera network could be updated with LFR-enabled cameras. With at least 6 million surveillance cameras already in the UK – a camera-to-population ratio similar to China’s – it would be logistically possible to implement such a system in future. The Metropolitan Police Service’s (MPS) LFR policy asserts that the force can and may lawfully do so. Many legal experts disagree. Such a system of surveillance would produce a level of biometric intrusion that, whilst embraced by authoritarian regimes, has never before been contemplated in a democracy. It is hard to overstate the chilling effect that this would have. LFR has already been used by the police to target protestors, for example at an anti-arms fair demonstration in Cardiff and against climate activists during the Formula One British Grand Prix at Silverstone, and deployed at other high-profile events attracting peaceful demonstrations, such as the Coronation of King Charles.
This intrusive surveillance, and the privacy concerns it raises, have had the effect of deterring those who might ordinarily participate in these lawful democratic activities, and as such constitute an interference with the protected Article 10 right to freedom of expression and Article 11 right to freedom of assembly.
The software is also open to abuse. Despite some protections provided by the Data Protection Act 2018 and the UK GDPR, we have already seen police using private facial recognition systems without scrutiny. The complete asymmetry of power and the absence of regulation set a dangerous precedent, whereby officers can know everything about an individual from a cursory LFR search without any prior authorisation.
The police rely on ‘common law powers’ as the legal basis for using LFR. This is an unusual and, in the view of many experts, inadequate basis on which to justify such widespread deployment of intrusive biometric surveillance. Other forms of biometric processing, such as fingerprinting and DNA testing, are subject to strict controls and oversight derived from specific legislation.
From a rule of law perspective, it is imperative that the law is clear, intelligible and predictable and protects fundamental human rights. The differing, broadly drawn policies adopted by various police forces do not meet these requirements and amount to officers writing their own rules for how to use the technology. Many of the Standard Operating Procedures of forces that use LFR also point to the GDPR and the Equality Act as further sources of legal authority; however, in effect, these contain broad, general duties for all public authorities – they are in no way enabling Acts and do not contain specific regulation of live facial recognition. The fact remains that the words ‘facial recognition’ do not appear in a single Act of Parliament.
In 2020, the Equality and Human Rights Commission (EHRC) recognised the limitations of the legislative backdrop upon which police rely to use LFR, describing it as “insufficient”, and called on the UK government to suspend the use of LFR. In its submission to the UN Human Rights Committee, the EHRC raised concerns “as to whether it [LFR] complies with the requirement that any interference with privacy rights under Article 17 ICCPR is both necessary and proportionate,” noting that “it is based solely on common law powers and has no express statutory basis”. In April 2024, the EHRC again expressed concerns about “the impact of police forces deploying surveillance at mass scale.”
In 2020, the Court of Appeal ruled in Bridges v South Wales Police that South Wales Police’s use of LFR had been unlawful due to “fundamental deficiencies” in the legal and policy framework, which gave rise to breaches of privacy, data protection and equality laws. The case was brought by civil liberties campaigner Ed Bridges, who found himself in the vicinity of two South Wales Police LFR deployments between 2017 and 2018. The Court of Appeal held that the interference with Mr Bridges’s privacy rights was not “in accordance with the law,” as there were no clear limitations on where the software could be used or who could be put on a watchlist, giving officers an overly broad discretion. The Court of Appeal decision was not concerned with the proportionality of the technology per se.
In May 2024, Shaun Thompson, who was misidentified by the MPS during an LFR deployment, initiated a judicial review of the force’s use of LFR, similarly arguing that its use of the software is unlawful.
Live facial recognition suffers from well-documented issues relating to accuracy and race and gender bias. In addition to Mr Thompson’s legal challenge, in our observation work Big Brother Watch has witnessed the MPS misidentify children in school uniforms using LFR and then subject them to lengthy, humiliating and aggressive police stops in which they were required to evidence their identity and provide fingerprints. In two such cases, the children were young black boys and both were scared and distressed. One of these stops was also witnessed by the MPS’s Independent Reviewer, Professor Peter Fussey, who remarked on how “distressed and clearly intimidated” the child was.
In April 2023, the MPS and South Wales Police (SWP) commissioned a report into the accuracy of their LFR system from the National Physical Laboratory (NPL). The Home Office and all seven police forces deploying LFR have relied on the NPL report to suggest that there is no statistically significant difference in the technology’s performance across different demographics. This is despite the fact that the study tested only one algorithm: not all police forces use the same LFR software, and the performance of different algorithms varies significantly. We have serious concerns about the limitations of this report and about how its findings have been presented by the police.
The report found that the police’s LFR software is in fact less accurate for women and people of colour, although this bias reduces when a higher match threshold is set to limit alerts, specifically at settings above 0.60. However, police forces have operated the technology at lower settings in the past and there are no safeguards to prevent them from doing so in the future. At the 0.60 threshold – at which various forces operate the software – 13 Black and Asian individuals were still falsely flagged, whilst no white individuals were misidentified. There is a greater margin of error when LFR technology is deployed in real-life settings than under laboratory test conditions. As the number of faces scanned grows, even a small false-positive rate means that hundreds, if not thousands, of individuals could be wrongly flagged and forced to prove that they are not who the technology says they are.
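The scale of that risk can be shown with some back-of-the-envelope arithmetic. The sketch below is purely hypothetical: the false-positive rates and the footfall figure are assumed for illustration and are not measurements from the NPL report or from any deployment.

```python
# Hypothetical arithmetic only: the rates and footfall below are assumptions
# chosen for illustration, not measured figures from any LFR deployment.
def expected_false_alerts(faces_scanned: int, false_positive_rate: float) -> float:
    """Expected number of people wrongly flagged during a deployment."""
    return faces_scanned * false_positive_rate


# Even rates that sound negligible add up quickly in a busy location.
for rate in (1e-4, 1e-3):  # i.e. 0.01% and 0.1%
    flagged = expected_false_alerts(100_000, rate)
    print(f"false-positive rate {rate:.2%}: ~{flagged:.0f} people wrongly "
          f"flagged per 100,000 faces scanned")
```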
Given that misidentifications are more likely to impact certain groups, this could exacerbate existing inequalities in policing. We have found this concern borne out during our observations of deployments. In February 2024, Shaun Thompson, a black anti-knife crime community worker, was misidentified by the Metropolitan Police’s facial recognition system, which was deployed near London Bridge. He was held up by the police for almost 30 minutes as they made him prove he was not the individual on the watchlist by showing multiple ID documents. Police also demanded scans of his fingerprints and threatened him with arrest when he declined. Mr Thompson is now bringing a legal challenge against the MPS’s use of facial recognition, arguing that it constituted an unlawful interference with his Article 8 right to privacy. The MPS maintains that its use of LFR is, and was, lawful.
The related issues of who is included on police watchlists and where deployments take place are also cause for concern. The MPS has deployed LFR in Croydon more than in any other borough since trials began in 2016. Singling out black and brown communities for increased biometric surveillance risks reinforcing existing biases in policing, or as Shaun Thompson put it, acts as “stop and search on steroids”.
Different forces have different procedures for where LFR deployments can be located. Hampshire and Essex have highly permissive standard operating procedures (SOPs) which allow LFR to be deployed anywhere there is a “likelihood” or “reasonable grounds to suspect” that at least one person on the police watchlist will attend the proposed location during the deployment. With large watchlists, statistically, this would mean that LFR could be deployed constantly – it is an ineffective and meaningless ‘safeguard’. The SOPs of Bedfordshire and Northamptonshire are even more vague, stating only that the Authorising Officer is responsible for choosing the location, while the decision of where to locate an LFR deployment in North and South Wales must – rather ambiguously – be “informed by the intelligence case”.
The MPS has recently updated its policy, setting out three categories of circumstances in which LFR can be deployed: to support the policing of crime hotspots, to support particular types of protective security operation (PSO), and to respond to a specific intelligence case. Despite their name, “crime hotspots” are not meaningfully targeted. A crime hotspot is defined as a geographical area of approximately 300-500m where the MPS’s “crime data”, “intelligence reporting” and/or “operational experience as to future criminality” indicates the crime rate is in the upper quartile. In practice, this includes all areas with large numbers of people and heavy footfall, meaning that the technology could be deployed almost anywhere and the police would be able to justify it on the basis of their policies. The Met’s permissive policy is also written in such a way that it allows officers to rely not only on objective data to determine which areas constitute a “crime hotspot”, but also on their subjective “operational experience”. Clearly, the latter cannot be scrutinised in the same way as hard data and gives officers substantial scope to make decisions about the location of deployments on the basis of unconscious bias.
In respect of the second category – supporting PSOs – LFR can be used within, and up to approximately 300m beyond, the external boundary of critical national infrastructure or an event, or at the nearest practicable location to the nearest operational transport access points to those locations. Again, this is an incredibly broad category, given that “critical national infrastructure” is not defined by the policy. Under the third category, LFR can be used where it has been assessed, on the basis of specific intelligence, that a person who is eligible for inclusion on an LFR Watchlist is likely to be at that location. However, under this category, a location may be chosen on the basis of a suspicion that just one individual on the watchlist is likely to be there, and the watchlist will then include a very large number of additional individuals.
Different forces also have different policies on who may be included on a watchlist. Most police forces list those who are currently “suspected or wanted for offences” as a watchlist category, but do not impose any seriousness threshold, which means that even those suspected of low-level offences, such as minor road traffic offences, would meet the criteria for being put on the list.
Several policies allow individuals who are subject to court orders, including those who are subject to bail conditions and prevention orders, to be included on watchlists and stopped and questioned by officers. The policies of some forces include provisions allowing them to target “those who pose a risk of harm.” In the past, police have used LFR technology to target protesters and people with mental health conditions. The vague wording of policies means that there is no reason why this could not be repeated. Several police forces also allow those who are “vulnerable” and “at risk” to be put on watchlists, and the MPS even permits the “victims of serious crime” to be included.
In practice, this means that the range of individuals who can be placed on a watchlist is incredibly broad and it is not just serious offenders who can be flagged, stopped by the police and forced to prove their identity under fear of arrest. Indeed, it is unlikely that a serious criminal, who knows they are wanted by law enforcement, would freely walk towards a group of police officers and a large van with LFR signage. The technology operates by requiring individuals to walk closely past the police, rather than the police pursuing suspects. It is therefore likely to be eluded by the most dangerous offenders. This is evidenced by a stabbing which took place just 100m from an LFR deployment in Croydon.
Arguably, LFR is highly inefficient and consumes excessive police resources. There is no requirement that police exhaust or even explore more proportionate options to speak to an individual or apprehend a suspect before adding them to an LFR watchlist. An article in The Times claimed that just under one arrest results from every two hours of LFR use. However, the Times article’s analysis is misinformed and misleading. The “just under one arrest per two hours” figure glosses over the long periods of police inactivity associated with LFR deployments, during which large numbers of officers and technical staff (approximately 20 per deployment) wait for alerts – meaning that every live “hour” represents at least 15-20 officer-hours.
Further, when considering how much time goes into a deployment, it is not only the dozens of policing hours spent by officers managing the deployment itself that should be taken into account, but also the copious hours of police work that go into authorising its use, preparing the impact assessments required for the processing of highly sensitive data, populating watchlists, preparing the software and hardware (including cameras, a van, mobile devices for each officer and, occasionally, mobile lighting) and distributing signage.
It should be emphasised that the individuals who make up watchlists are already known to the police. Instead of proactively making inquiries at these suspects’ addresses, which would be a more proportionate option, forces deploying LFR are dedicating an exceptional amount of time and resources to waiting for criminals to come to them.
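As a rough illustration of the resource cost, the sketch below combines the two figures already cited above – the reported rate of just under one arrest per two hours of deployment and the roughly 20 officers and technical staff present at each deployment. It is a back-of-the-envelope estimate based on those claims, not an official statistic, and it excludes the preparatory work described above.

```python
# Back-of-the-envelope estimate using the figures cited in this briefing;
# these are the briefing's claims, not official statistics, and they exclude
# the authorisation, assessment and set-up work that precedes a deployment.
DEPLOYMENT_HOURS_PER_ARREST = 2   # "just under one arrest per two hours"
STAFF_PER_DEPLOYMENT = 20         # officers and technical staff on site

officer_hours_per_arrest = DEPLOYMENT_HOURS_PER_ARREST * STAFF_PER_DEPLOYMENT
print(f"~{officer_hours_per_arrest} on-site officer-hours per arrest")
# prints: ~40 on-site officer-hours per arrest
```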
There is also a significant divergence between the level of intrusion associated with traditional policing and that associated with facial recognition surveillance. Whilst LFR is comparable to other forms of biometric identification, such as fingerprinting and DNA testing, it is far more invasive, as it is linked to a deeply personal identifying feature (an individual’s face) and is deployed in public settings, often without the consent or knowledge of the person being subjected to checks.
From our observations, we have seen how this dynamic changes the relationship between the police and the communities being policed. Having automated surveillance software mediate these interactions undermines the Peelian principle that “the police are the public and that the public are the police” and embeds within policing a Kafkaesque possibility of being stopped at any moment without knowing why, yet having to effectively prove your innocence.
It is noteworthy that LFR is most enthusiastically embraced by authoritarian regimes, such as Russia and China, whilst other democratic countries have taken measures to restrict its use. Several US states and cities have implemented bans and restrictions on the use of LFR, and the EU has adopted the AI Act, which prohibits the use of LFR for law enforcement purposes except in the most serious and strictly defined cases, subject to a requirement of judicial authorisation. Member states must pass domestic legislation in order to use LFR, and each use must be reported to the relevant data protection authority.
This is a far cry from the UK’s unregulated approach and lack of oversight.
When the LFR software flags a potential match on the watchlist, an officer will stop the individual for further questioning. Despite this human involvement, our observations of deployments have highlighted how officers put significant trust in LFR software, even when it has clearly made a mistake.
This was evident in the case of Shaun Thompson, mentioned above, who was misidentified by the MPS during an LFR deployment near London Bridge. Whilst one plain-clothes officer quickly realised that Mr Thompson was not the person in the watchlist image, and others remarked that he did not have the same “very distinctive” facial scars as the person sought by the police, the officers continued to detain him, noting that “this thing [the LFR software] is normally really accurate.” Our observations are supported by the conclusions of a July 2019 review of LFR trials by the MPS’s Independent Reviewer, Professor Peter Fussey, which found that officers place significant weight on LFR flags in their operational decision-making. This produces situations in which individuals who have been misidentified are nonetheless subjected to invasive questioning and scrutiny because officers do not believe the software can get it wrong.
We all have something to fear from the rise of mass surveillance technology in the UK. The use of LFR by the police raises big questions about the state of our democracy and, with the dawn of new technology, what kind of society we want to live in. Knowing that the police could obtain a biometric scan of your face, without requiring any suspicion, as you walk down the high street has a potentially chilling effect on the behaviours many citizens are ordinarily willing to engage in – including lawful activities which are essential to democratic participation, such as attending peaceful demonstrations.
As several case studies demonstrate, the claim that individuals who have nothing to hide have nothing to fear is untrue: Shaun Thompson had nothing to hide but was nonetheless detained by the police after he was misidentified by the MPS’s LFR system. Placing the onus on individuals to prove their identity and their innocence puts us all at risk of having to defend ourselves against false accusations in the event we are wrongly flagged.
The Met Commissioner has suggested that LFR enjoys broad public support, citing the figure that 60% of people reported being “comfortable” with “using biometrics to identify criminal suspects in crowded areas.” However, this figure is misleading due to the use of a leading question, which could, in fact, refer to a range of technologies, including manual fingerprint checks.
The Mayor of London’s independent London Policing Ethics Panel conducted a separate poll, in which 37% of 16-24 year olds and 25% of 25-39 year olds in London reported that they would “stay away from events where I know LFR would be used.” This discomfort with biometric surveillance is even more pronounced among people of Asian and mixed ethnicity in London, of whom almost a third responded that they would not attend venues where LFR was in use. It is unlikely that the general public would support those who are neither suspected of a crime nor wanted for arrest – such as victims and associates – being placed on police watchlists.
The Ada Lovelace Institute has conducted nuanced research on public opinion towards LFR, concluding that although police press releases have raised awareness of LFR, familiarity with the technology does not equate to greater knowledge about how the software works. The research also indicated that the vast majority of people who have a view on the technology would like the ability to opt out; 61% of the public oppose the use of LFR on public transport and most people believe that LFR should be subject to limits.
The technology has also received significant cross-party backlash and condemnation from civil society. In October 2023, 65 Parliamentarians and 32 rights and race equality groups in the UK called for an immediate stop to LFR for public surveillance.
For more information contact:
info@bigbrotherwatch.org.uk