- Is the use of FRT by police in compliance with international human rights?
- Is it reliable? Is it discriminatory?
- Is the human-in-the-loop “safeguard” really a safeguard?
- Is adequate FRT training provided to police officers to mitigate against risks?
- Are the disproportionately impacted communities consulted adequately?
- Are people informed that they are being subjected to FRT?
- Has the relevant information about the system been disclosed to those accused based on FRT?
Is the use of FRT by police in compliance with international human rights?
Many of our rights are enshrined in the International Covenant on Civil and Political Rights (ICCPR), the European Convention on Human Rights (ECHR), the Charter of Fundamental Rights of the European Union (EU Charter) and the American Convention on Human Rights (ACHR).1See Articles 8–11 ECHR, Articles 12, 17, 18, 19, 21 and 22 ICCPR, and Articles 11, 12, 13, 15 and 16 ACHR.
These rights are not absolute. However, under international human rights law, our rights may generally only be restricted or limited where the restriction:
- Is provided or prescribed by law and is not arbitrary;
- Pursues a legitimate aim;
- Is strictly necessary in a democratic society to achieve the aim in question; and
- Is proportional to the legitimate aim.
→ Prescribed by law and not arbitrary
The meaning of “law” in this context implies certain minimum requirements of clarity, precision, accessibility and predictability. This allows individuals to foresee the consequences of their actions and regulate their behaviour and conduct accordingly, and it also safeguards against states arbitrarily interfering with people’s rights when they exercise their powers. As such, a law providing for the use of FRT – and therefore the processing of biometric facial data – cannot be considered a “law” if it is not public, and merely passing a law that permits FRT use but fails to meet the basic requirements of clarity and accessibility cannot make that use “lawful”. Any interference with a right must have a legal basis, and that legal basis must be of sufficient quality to protect against arbitrary interferences.
- Example: In the case of Europe, the European Data Protection Board has stated the following in respect of police use of FRT:
“The legal basis must be sufficiently clear in its terms to give citizens an adequate indication of conditions and circumstances in which authorities are empowered to resort to any measures of collection of data and secret surveillance. A mere transposition into domestic law of the general clause in Article 10 LED2Article 10, Law Enforcement Directive: Processing of special categories of personal data: “Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be allowed only where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and only: (a) where authorised by Union or Member State law; (b) to protect the vital interests of the data subject or of another natural person; or (c) where such processing relates to data which are manifestly made public by the data subject.” would lack precision and foreseeability.”3Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement, European Data Protection Board, p. 5, https://edpb.europa.eu/system/files/2023-05/edpb_guidelines_202304_frtlawenforcement_v2_en.pdf.
- Example: In the UK, where police use live and retrospective FRT, there is no explicit or dedicated legal basis for the use of FRT by police. Instead the police rely on a range of other pieces of legislation and common law powers.4The Data Protection Act 2018, Regulation of Investigatory Powers Act 2000, Protection of Freedoms Act 2012, Human Rights Act 1998, Equality Act 2010, Police and Criminal Evidence Act 1984.
- Example: Research into 38 FRT initiatives in 9 Latin American countries found that just 14 of them indicated the existence of regulations to support the use of FRT; even then, most of those regulations did not strictly provide for FRT use but, rather, broadly allowed for powers to use FRT (“to oversee compliance with provisions on evasion in public transport”, for “immigration verification functions, foreigners and immigration control”, etc.)5Venturini, J and Garay, V (Nogueira, P trans.), Facial recognition in Latin America: Trends in the implementation of a perverse technology, AlSur, 2021, https://www.alsur.lat/sites/default/files/2021-10/ALSUR_Reconocimiento%20facial%20en%20Latam_EN_Final.pdf.
→ Pursues a legitimate aim
The rights to privacy, freedom of expression and freedom of association come with limitation clauses. For example, an interference with the right to privacy under the European Convention on Human Rights could only be considered legitimate if it is “in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others”.6Article 8 (2), European Convention on Human Rights. But if there is no evidence that a person’s conduct has any link with a legitimate aim, then there is little or no justification for an interference with their rights.
- Example: Article 6 of the EU’s Law Enforcement Directive (LED) obliges data controllers to distinguish between different categories of data subjects (i.e. people). On this, the European Data Protection Board has said:
“With regard to data subjects for whom there is no evidence capable of suggesting that their conduct might have a link, even an indirect or remote one, with the legitimate aim according to the LED, there is most likely no justification of an interference. If no distinction according to Article 6 LED is applicable or possible, the exception from the rule of Article 6 LED has to be rigorously considered in the assessment of the necessity and proportionality of the interference.”7Guidelines 05/2022 on the use of facial recognition technology in the area of law enforcement, European Data Protection Board, p. 23, https://edpb.europa.eu/system/files/2023-05/edpb_guidelines_202304_frtlawenforcement_v2_en.pdf.
It added that distinguishing between different categories of data subjects is “an essential requirement when it comes to personal data processing involving facial recognition” due to the potential for false positive and false negative hits.
→ Necessary in a democratic society to achieve the aim in question
This limb of the test involves identifying the range of rights engaged by the interference in order to determine whether the measure is necessary, and whether it inappropriately undermines other, competing rights. As part of this assessment, a proposed measure should be supported by evidence that describes the problem being addressed, shows how the measure will be genuinely effective in addressing it, establishes that the measure is the least intrusive option available and explains why existing measures cannot address the problem. Whenever we consider a surveillance measure, tool or law, especially one as powerful, large-scale, intrusive and invasive as FRT in all its use cases, we must consider whether it is efficacious, because a surveillance tool or measure’s efficacy speaks to its necessity and proportionality. In addition, if a proposed measure includes the processing of sensitive data, such as biometric facial data, a higher threshold should be applied to the assessment of effectiveness.
- Example: In Glukhin v Russia, the European Court of Human Rights held that Russia’s use of FRT to identify and apprehend a peaceful protester breached the protester’s privacy and freedom of expression rights. The court held the applicable domestic legal provisions did not meet the “quality of law” requirement and that the processing of Mr Glukhin’s personal data using FRT could not be regarded as “necessary in a democratic society”. Specifically, it noted: “The domestic law does not contain any limitations on the nature of situations which may give rise to the use of facial recognition technology, the intended purposes, the categories of people who may be targeted, or the processing of sensitive personal data. Furthermore, the Government did not refer to any procedural safeguards accompanying the use of facial recognition technology in Russia, such as the authorisation procedures, the procedures to be followed for examining, using and storing the data obtained, any supervisory control mechanisms or the available remedies.”8Glukhin v Russia, App. No. 11519/20, https://hudoc.echr.coe.int/eng?i=001-225655.
- Example: In S. and Marper v the United Kingdom,9S. and Marper v. the United Kingdom [GC], nos. 30562/04 and 30566/04, § 102, ECHR 2008-V. the European Court of Human Rights held that the UK’s indefinite retention of biometric data (in this case, fingerprints and DNA samples) of people charged but not convicted was not necessary in a democratic society. Holding that the UK failed to strike a fair balance between competing public and private interests, it stated: “The Court finds that the blanket and indiscriminate nature of the powers of retention of the fingerprints, cellular samples and DNA profiles of persons suspected but not convicted of offences, as applied in the case of the present applicants, fails to strike a fair balance between the competing public and private interests and that the respondent State has overstepped any acceptable margin of appreciation in this regard. Accordingly, the retention at issue constitutes a disproportionate interference with the applicants’ right to respect for private life and cannot be regarded as necessary in a democratic society.”
→ Is proportional
If the necessity test is not satisfied, there is no need to carry out a proportionality test. If it is satisfied, a proportionality test must then be carried out. The principle of proportionality is based on the idea that a measure should not exceed what is necessary to achieve its objective: its advantages should not be outweighed by its disadvantages. A proportionality assessment, carried out on a case-by-case basis, must weigh the importance of the objective and whether the measure actually meets it, assess the scope, extent and strength of the interference, and examine what safeguards accompany the measure in order to reduce the associated rights risks.
- Example: As stated by the European Data Protection Supervisor: “At the core of the notion of proportionality lies the concept of a balancing exercise: the weighing up of the intensity of the interference vs the importance (‘legitimacy’, using the wording of the case law) of the objective achieved in the given context.
“A well-performed test needs the express identification, and structuring into a coherent framework, of the different elements upon which the weighting depends, in order to be complete and precise.”10 EDPS Guidelines on assessing the proportionality of measures that limit the fundamental rights to privacy and to the protection of personal data, 2019, https://edps.europa.eu/sites/default/files/publication/19-02-25_proportionality_guidelines_en.pdf.
Is it reliable? Is it discriminatory?
There are several steps involved in the use of FRT, whether the use is live, retrospective or operator-initiated. Each stage presents opportunities for error and misidentification, and for discriminatory, disproportionate and unnecessary surveillance. As such, all stages have to be carefully considered when assessing how FRT impacts people’s fundamental rights,11EFF, Electronic Privacy Information Center (EPIC) and the National Association of Criminal Defense Lawyers (NACDL), amicus brief in State of New Jersey v. Francisco Arteaga, https://www.eff.org/document/state-new-jersey-v-francisco-arteaga. and given FRT has clearly demonstrated racial and gender biases,12Buolamwini, J, and Gebru, T, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 2018, http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf. See also Deborah Raji, I, and Buolamwini, J, “Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products”, Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, https://dl.acm.org/doi/10.1145/3306618.3314244. See also Cook, C, Howard, J, Sirotin, Y, Tipton, J, and Vemury, A, “Demographic Effects in Facial Recognition and Their Dependence on Image Acquisition: An Evaluation of Eleven Commercial Systems”. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2019 https://ieeexplore.ieee.org/document/8636231. See also NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software, 19 December 2019. NIST wrote: “How accurately do face recognition software tools identify people of varied sex, age and racial background? According to a new study by the National Institute of Standards and Technology (NIST), the answer depends on the algorithm at the heart of the system, the application that uses it and the data it’s fed – but the majority of face recognition algorithms exhibit demographic differentials. A differential means that an algorithm’s ability to match two images of the same person varies from one demographic group to another.” https://www.nist.gov/news-events/news/2019/12/nist-study-evaluates-effects-race-age-sex-face-recognition-software. See also Findley, B, “Why Racial Bias is Prevalent in Facial Recognition Technology”, Harvard Journal of Law and Technology, November 2020, https://jolt.law.harvard.edu/digest/why-racial-bias-is-prevalent-in-facial-recognition-technology. these biases must always be remembered when each step of an FRT use is considered. The answers to the following questions may also have a bearing on the aforementioned tests: is the use of FRT provided by law and not arbitrary? Does it pursue a legitimate aim? Is it strictly necessary in a democratic society? And is it proportionate to the legitimate aim? INCLO is grateful for the research, audit assessments, model laws and ethical frameworks created by Radiya-Dixit,13Radiya-Dixit, E, A Sociotechnical Audit: Assessing Police Use of Facial Recognition, Cambridge: Minderoo Centre for Technology and Democracy, 1 October 2022, https://doi.org/10.17863/CAM.89953. Davis, Perry and Santow,14Davis, N, Perry, L & Santow, E, Facial Recognition Technology: Towards a model law, Human Technology Institute, The University of Technology, September 2022 https://www.uts.edu.au/sites/default/files/2022-09/Facial%20recognition%20model%20law%20report.pdf.
and Lynch, Campbell, Purshouse and Betkier15Lynch, N, Campbell, L, Purshouse, J & Betkier, M, Facial Recognition Technology in New Zealand: Towards a Legal and Ethical Framework, Law Foundation of New Zealand, 30 November 2020, https://doi.org/10.25455/wgtn.17204078.v1. in helping to identify and shape these questions.
Probe image
In this step, a police officer obtains an image of a person – from a mobile device, CCTV or another source – and runs it against a database of images of known people using FRT. Questions to be considered in respect of this step include:
- Where does the probe image come from?
- Why was this image chosen?
- Who took the image?
- How old is the probe image?
- Was it taken from CCTV, a body-worn camera, a mobile phone or a social media account?
- Was it lawfully obtained, stored and shared?
- What is the quality of the image? How high is the resolution? How good is the lighting?
- Is the person looking directly at the camera in the image?
- Is anything obstructing the person’s face?
- Who had access to the image before it was used in an FRT search?
- Is there a legal basis for processing the image in a manner which leads to the extraction of the facial features of the person in the image, thereby creating biometric data?
- Is there a robust and effective oversight mechanism to safeguard the fundamental rights of the person whose image is being probed?
Reference database selection
Similar to probe images, reference database selection plays a significant role not only in the reliability of a retrospective FRT search but also in how discriminatory a system can be in terms of who is subjected to an FRT search. The nature of FRT and the use of reference databases inescapably put innocent people at risk of being misidentified, and they also carry the risk of unwarranted surveillance. For example, in Buenos Aires, Argentina, the city’s live FRT system, which was set up to search for fugitives, was found to be unconstitutional for several reasons, not least because some 15,459 people who were not fugitives were included in the database without judicial approval. This resulted in FRT searches of the president, politicians, human rights activists and journalists – not just fugitives.16 Naundorf, K, The Twisted Eye in the Sky Over Buenos Aires, Wired, September 2023, https://www.wired.com/story/buenos-aires-facial-recognition-scandal/. Questions to be considered include (a simple worked example of the misidentification risk follows this list):
- How was the database of images compiled?
- Are there clear, objective and limited criteria with regard to who is added to the database?17Radiya-Dixit, E, A Sociotechnical Audit: Assessing Police Use of Facial Recognition, Cambridge: Minderoo Centre for Technology and Democracy, 1 October 2022, https://www.mctd.ac.uk/wp-content/uploads/2022/10/MCTD-FacialRecognition-Report-WEB-1.pdf. See also Guidelines on facial recognition, Consultative Committee of the Convention for the protection of individuals with regard to automatic processing of personal data, Convention 108, Council of Europe, June 2021, https://rm.coe.int/guidelines-facial-recognition-web-a5-2750-3427-6868-1/1680a31751.
- Has there been a fundamental rights risk assessment carried out in respect of the people whose images are in the database?
- Where did the images originate?
- What technical and security measures are in place to secure the database and prevent wrongful access to it?
- If the database of images is sourced from the criminal justice system, what is the legal basis for using those images?
- If the database of images is sourced from the criminal justice system, are the images of people who have been arrested, charged and/or convicted?
- If the database of images is sourced from the criminal justice system, is there a legally defined retention period for storing those images?
- If the database of images is sourced from the criminal justice system, is there a risk that a disproportionate number of people from a particular community will be in that database because members of that community are already overrepresented in the state’s prisons? (If this is the case, members of those communities will face a higher risk of being misidentified by an FRT search.)
- Does the police force have a code of practice and/or written guidance regarding the retention and deletion of photographs taken by members of the police of people arrested and/or convicted?
- What is the legal basis for either creating or accessing this database?
- Was the database lawfully created and have the images been lawfully stored and processed?
- How old are the images in the database? Are they up to date?
- Were the images taken from CCTV, body-worn cameras, mobile phones or social media accounts?
- What is the quality of the images? How high is the resolution? How good is the lighting?
- Is the database composed of images of people looking directly at the camera?
- Is anything obstructing the faces of the people whose images are in the database?
- What safeguards are in place to protect the security of the database?
- Who has access to the database?
- Is there a logging policy to document who accesses the database, when and how?18Davis, N, Perry, L & Santow, E, Facial Recognition Technology: Towards a model law, Human Technology Institute, The University of Technology, September 2022, https://www.uts.edu.au/sites/default/files/2022-09/Facial%20recognition%20model%20law%20report.pdf.
- Is there a legal basis for processing the images in the database in a manner which leads to the extraction of the facial features of the people in the database, and a legal basis to allow for an FRT search in respect of a probe image?
- Is the person in the probe image actually in the database? (If not, every estimated match returned by the system will be incorrect.)
- When a person’s image is no longer necessary for the initial purpose of being placed in the database, is it deleted in a timely and effective manner?
- Is there a robust and effective oversight mechanism to safeguard the fundamental rights of the people whose images are in the database?
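As a rough, purely illustrative aid to the questions above, the short Python sketch below uses invented figures – not measurements from any real FRT system – to show why database composition matters: even a seemingly small false match rate produces many innocent candidates when a probe is compared against a large database, and when the person in the probe image is not in the database at all, every candidate returned is wrong.

```python
# Back-of-the-envelope illustration with invented, hypothetical figures.
database_size = 500_000    # images held in the police reference database
false_match_rate = 0.001   # assumed chance that a non-matching pair is scored as a "match"

# Case 1: the person in the probe image IS in the database.
# Alongside the one true match, we still expect false matches among everyone else.
expected_false_candidates = (database_size - 1) * false_match_rate
print(f"Expected innocent candidates alongside the true match: {expected_false_candidates:.0f}")

# Case 2: the person in the probe image is NOT in the database.
# Every candidate the system returns is, by definition, a misidentification.
expected_false_candidates_absent = database_size * false_match_rate
print(f"Expected candidates when the person is absent (all wrong): {expected_false_candidates_absent:.0f}")
```

Under these assumed figures, a single search would surface roughly 500 innocent candidates, which is why the questions about who is in the database, how it was compiled and whether the person being sought is in it at all bear directly on reliability.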
Watchlists
Watchlists are the reference databases used in the real-time application of FRT: faces in a live camera video feed are compared against a predetermined watchlist, and a possible match generates an alert that police or the operator may act upon instantaneously, near-instantaneously or without significant delay. The questions that need to be considered regarding who should and should not be included in a watchlist are the same as those that arise when probe images and reference databases are chosen or populated. Additional questions that need to be asked, for each live use of FRT, include (a minimal sketch of the alerting step follows this list):
- What is the legal basis for placing a person’s image on a watchlist for each specific use of live FRT?
- Where will the law enforcement authority get the images to add to a watchlist?
- Will those images be of a certain quality or age?
- How long can an image be on a watchlist?
- How can a person be removed from a watchlist?
- What specific criteria must be met before a person’s image is added to a watchlist, or before any watchlist is constructed?19 Radiya-Dixit, E, A Sociotechnical Audit: Assessing Police Use of Facial Recognition, Cambridge: Minderoo Centre for Technology and Democracy, 1 October 2022, https://www.mctd.ac.uk/wp-content/uploads/2022/10/MCTD-FacialRecognition-Report-WEB-1.pdf.
- What protocols must be followed before a person’s image is added?
- Who can ask, and give, permission for a person’s image to be added to a watchlist per deployment?
- How will the law enforcement authority assess and demonstrate that the creation of a watchlist, or the addition of a person to a watchlist, is necessary and proportionate?
- How will a person know if they are on a watchlist?
- What remedy will be available to them if they are wrongfully placed on a watchlist?
- What remedy will be available to them if they are wrongfully misidentified and subjected to an action because of an error in the creation of the watchlist?
- Will the specific details of how and when a person is added to a watchlist be made public for each deployment of live FRT?
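To make the alerting step described above concrete, the following minimal Python sketch is an illustration only: the names, threshold and data structures are hypothetical and do not reflect any vendor’s actual system. It shows the basic shape of a live deployment, where a similarity score produced by the matcher is compared against a preset threshold and, if it crosses that threshold, an alert is generated for officers to act on in near real time.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

ALERT_THRESHOLD = 0.80  # hypothetical value; real deployments tune this per use


@dataclass
class Alert:
    watchlist_id: str   # the watchlist entry the system believes it has seen
    score: float        # similarity score produced by the (proprietary) matcher
    camera_id: str
    timestamp: str


def maybe_alert(watchlist_id: str, score: float, camera_id: str) -> Optional[Alert]:
    """Generate an alert only when the matcher's score crosses the threshold.

    Everything downstream of this decision (an officer choosing whether to stop
    someone) happens with little or no delay, so the threshold setting and the
    watchlist composition directly determine who is stopped in the street.
    """
    if score < ALERT_THRESHOLD:
        return None
    return Alert(watchlist_id, score, camera_id, datetime.now(timezone.utc).isoformat())


# Hypothetical example: a passer-by scores 0.83 against watchlist entry "W-042".
print(maybe_alert("W-042", 0.83, camera_id="van_camera_1"))
```

The choice of threshold is itself a policy decision: lowering it produces more alerts and more misidentifications, raising it produces fewer, and neither the value nor its justification is routinely disclosed.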
Potential photo editing/tampering
There are documented cases of police authorities editing probe images before running them through a retrospective FRT search, putting the reliability of the search result at considerable risk and, therefore, risking the rights of individuals subsequently apprehended on the basis of those results. Research from the USA shows that the New York Police Department’s Facial Identification Section, which is permitted to edit probe images, uses tools to do things such as removing a facial expression, inserting a different person’s eyes into the image and generating a cheek or chin area where part of the face is not visible in the image.20Garvie, C, Garbage In, Garbage Out, Georgetown Law Center on Privacy and Technology, 2019, https://www.flawedfacedata.com/. In one alarming case, after officers ran a pixelated CCTV still of a suspected shoplifter through an FRT system and it yielded no potential candidates, they decided to run a photo of Hollywood actor Woody Harrelson through the system because they felt the suspect resembled him. They subsequently arrested a man “they believed was a match – not to Harrelson but to the suspect whose photo had produced no possible hits”.21 Ibid. Elsewhere in the USA, police have been found carrying out FRT searches on sketches – that is, hand-drawn or computer-generated images based on descriptions that eyewitnesses have offered police.22 Ibid. Questions to be considered include:
- On what grounds can such editing be justified?
- Given the significant rights concerns, should police forces be allowed to alter probe images in this way?
- Should there, at the very least, be a legal basis for such editing?
Algorithmic search
This step involves the subsystem that compares a probe photo against a database of images (in the case of retrospective or operator-initiated FRT) or compares facial images obtained from live video footage against facial images on a watchlist. Considering that FRT algorithms are created and developed by private companies, are not generally open to independent audits and/or risk assessments, are shown by research to be biased against anyone who is not a white, middle-aged man2381% of “suspects” flagged by Met’s police facial recognition technology innocent, independent report says, Sky News, 4 July 2019, https://news.sky.com/story/met-polices-facial-recognition-tech-has-81-error-rate-independent-report-says-11755941. See also Fussey, P & Murray, D, Independent Report on the London Metropolitan Police Service’s Trial of Live Facial Recognition Technology, Human Rights Centre, University of Essex, July 2019, https://repository.essex.ac.uk/24946/1/London-Met-Police-Trial-of-Facial-Recognition-Tech-Report-2.pdf. See also Buolamwini, J & Gebru, T, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, 2018, https://proceedings.mlr.press/v81/buolamwini18a.html. and are each trained on different datasets, likely producing different results depending on their individual algorithms, the following questions should be considered (an illustrative sketch of the comparison step follows this list):
- What steps have the police taken to independently audit the veracity of the vendor’s claims about the FRT system/respective algorithm?24Radiya-Dixit, E, A Sociotechnical Audit: Assessing Police Use of Facial Recognition, Cambridge: Minderoo Centre for Technology and Democracy, 1 October 2022, https://www.mctd.ac.uk/wp-content/uploads/2022/10/MCTD-FacialRecognition-Report-WEB-1.pdf
- Have proprietary interests prevented the police from obtaining information about how the algorithm works and the risks it poses?25Lynch, N; Campbell, L; Purshouse, J; Betkier, M, Facial Recognition Technology in New Zealand: Towards a Legal and Ethical Framework, Law Foundation of New Zealand, 30 November 2020, https://doi.org/10.25455/wgtn.17204078.v1.
- Is there a legal mechanism to oblige vendors to publish and/or disclose certain information about their algorithms?
- Prior to any use of FRT, do the police regularly carry out fundamental rights impact assessments and demonstrate that their specific use of FRT does not have a detrimental effect on the rights of the public and/or that it does not have the potential to produce discriminatory effects?
- Are the results of these assessments published?
- What steps have the police taken to mitigate the risks posed to people disproportionately affected by FRT?
- Do the police evaluate and publish the demographic make-up of the training dataset underlying the algorithm, to ensure the dataset is representative of the population where it is to be used?26 Ibid.
- Do the police publish the demographic data of those who are subjected to FRT searches?27 Ibid.
- Do the police publish the demographic data for arrests, stops and searches, and other outcomes resulting from the use of FRT?28 Ibid.
- Do the police have accountability mechanisms in place to address misidentifications when they arise, including a mechanism to notify those affected and offer redress?
- What changes do the police make to their protocols, databases and systems when they become aware that a misidentification has led to a person’s stop and search, arrest, detention or charge?
- Do the police have accountability mechanisms in place to address unfair and unwarranted automated decision making when it arises on account of FRT, including a mechanism to notify those affected and offer redress?
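Because the comparison subsystem is where vendor algorithms, training data and scoring choices determine who appears on a candidate list, a simplified sketch may help non-specialist readers picture what is being audited. The Python below is an illustrative outline under stated assumptions: the similarity function, data and identifiers are invented, and real vendor systems are proprietary, closed to inspection and far more complex, which is precisely the problem the questions above address.

```python
import heapq


def similarity(a: list[float], b: list[float]) -> float:
    """Toy cosine-style similarity between two facial templates.
    Real vendors use proprietary scoring that is rarely open to audit."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0


def candidate_list(probe_template: list[float],
                   database: dict[str, list[float]],
                   top_n: int = 20) -> list[tuple[str, float]]:
    """Return the top-N database entries most similar to the probe template.

    Every entry is only an *estimated* match: if the person in the probe image
    is not in the database, all of the returned candidates are wrong.
    """
    scored = ((record_id, similarity(probe_template, template))
              for record_id, template in database.items())
    return heapq.nlargest(top_n, scored, key=lambda pair: pair[1])


# Invented toy data: in practice the templates come from a vendor's closed
# feature extractor, trained on datasets the police rarely see or evaluate.
toy_database = {
    "record_001": [0.90, 0.10, 0.30],
    "record_002": [0.20, 0.80, 0.50],
    "record_003": [0.40, 0.40, 0.40],
}
print(candidate_list([0.85, 0.15, 0.35], toy_database, top_n=2))
```

The ranking and confidence scores in such a list are exactly the artefacts that questions about audits, error rates and demographic performance seek to interrogate.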
Is the human-in-the-loop “safeguard” really a safeguard?
Police forces wishing to assuage concerns about FRT misidentification and human rights infringements often say there is nothing to be concerned about because a “human-in-the-loop” will safeguard against automated decisions: a human will always review a list of candidates before any further steps are taken, whether the use is live or retrospective. However, a human – whether a police officer or an eyewitness – will not always correct an incorrect FRT “match”. Michael Oliver, who has a face tattoo, was wrongfully arrested and detained for almost three days in Detroit after an FRT search returned him as a suspect and an eyewitness picked him out of a photo line-up, despite the photo of the suspect displaying no face tattoo.29“Faulty Facial Recognition Led to His Arrest – Now He’s Suing”, Vice, September 2020, https://www.vice.com/en/article/bv8k8a/faulty-facial-recognition-led-to-his-arrestnow-hes-suing. In respect of human reviews, the following questions should be considered:
- What role does a human reviewer play in the case of retrospective FRT use?
- What role does a human reviewer play in the case of live FRT use?
- What research have the police carried out to demonstrate the accuracy of their human reviewers?30Radiya-Dixit, E, A Sociotechnical Audit: Assessing Police Use of Facial Recognition, Cambridge: Minderoo Centre for Technology and Democracy, 1 October 2022, https://www.mctd.ac.uk/wp-content/uploads/2022/10/MCTD-FacialRecognition-Report-WEB-1.pdf.
- What specific training have the human reviewers undergone for differing use cases?
- What other information do human reviewers have about a case or individual before or when they are reviewing a candidate?
- Is the human reviewer independent of the investigation into the alleged offence at the centre of the FRT search?
- What steps are taken and/or what safeguards are in place to mitigate against the human reviewer’s own biases?31Such as confirmation bias, which refers to the confirmation of the reviewer’s personal beliefs, and automation bias which implies a tendency to trust the legitimacy of information on the basis that it was produced by technology.
- Do the police have accountability mechanisms in place to address misidentifications after a human review, including a mechanism to notify those affected and offer redress?
- What changes do the police make to their protocols regarding their human reviewer after a misidentification of a person leads to that person being arrested, detained or charged?
- Do “human review” protocols take into consideration, and reflect, the differing rights risks that can stem from live and retrospective use of FRT?
- Is an eyewitness told about the use of FRT by the police?
Is adequate FRT training provided to police officers to mitigate against risks?
Putting a powerful tool such as FRT in the hands of police officers who are not trained to use and understand it, combined with the absence of any independent oversight and assessment of that use, can only serve to further entrench and expand the problems with police use of FRT. It has been reported that, while the US Federal Bureau of Investigation (FBI) has carried out tens of thousands of FRT searches over recent years, just 5 percent of its 200 agents who use the technology have taken the FBI’s own course on how to use it.32Johnson, K, FBI Agents Are Using Face Recognition Without Proper Training, Wired, September 2023, https://www.wired.com/story/fbi-agents-face-recognition-without-proper-training/. It is unclear what training takes place in other states where police use FRT. In respect of oversight, in the UK there is a Surveillance Camera Commissioner, while in the USA there have been calls for a regulatory office to oversee the management and regulation of complex technologies such as FRT, similar to how the pharmaceutical industry is regulated,33Learned-Miller, E, Ordóñez, V, Morgenstern, J, and Buolamwini, J, Facial Recognition Technologies in the Wild: A Call for A Federal Office, 29 May 2020, https://assets.website-files.com/5e027ca188c99e3515b404b7/5ed1145952bc185203f3d009_FRTsFederalOfficeMay2020.pdf. and/or an independent body charged with certifying policing technologies before they are deployed.34Friedman, B, Heydari, F, Isaacs, M & Kinsey, K, “Policing Police Tech: A Soft Law Solution”, Berkeley Technology Law Journal, Vol. 37, 2022, Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4095484. Questions to be considered include:
- What would an effective rights-based training course on police use of FRT entail?
- Should a police officer be required to obtain a certificate or accreditation before being allowed to use FRT?
- Would any such training be enough to mitigate human rights concerns?
- Should details of this training be made public?
- What would an effective oversight body/mechanism look like?
- What powers should such a body have to make it effective and accountable?
- Would having an oversight body or mechanism be enough to mitigate concerns?
- Is there a danger that calls for training and oversight mechanisms ultimately serve to green-light FRT use by police and act as a tick-box exercise?
Are the disproportionately impacted communities consulted adequately?
A significant issue concerning the use of FRT by police is that the very communities disproportionately affected by this error-prone technology are not consulted in a transparent manner about the technology, how it works and how it impacts the criminal justice system, people’s lives and their fundamental rights. Questions to be considered include:
- Should consultation with members of the public and members of communities disproportionately affected by FRT be mandated?
- How could that consultation be effective?
- Should that mandated consultation include details from the police to the public about the specific technology used, how it is used, how the different subsystems and algorithms work, details of the datasets it was trained on and the production of a data protection impact assessment and fundamental rights impact assessment?
- Should that consultation include details of how probe images are chosen, how reference databases are created and how watchlists are created?
- Should that consultation take place every time the police change their policies or specific technology use?
Are people informed that they are being subjected to FRT?
Another significant transparency issue occurs when people are simply not aware that FRT is being used in their environment. For example, in the UK, when live FRT is being used the police are supposed to alert the public to its use. However, this often happens on social media, which is not a sufficient way of alerting the public: many people are not on the relevant platforms, and those who are may not have seen the post. Police in the UK are also supposed to mark out the physical area where live FRT is being used so that people who do not wish their biometric data to be processed can avoid it. However, signs are usually placed so close to the area that it is often too late, or too cumbersome, to avoid it. Questions to be considered include:
- Data protection law in the EU provides that individuals have a right to know about the processing of their personal data. For CCTV, this means that the data controller must erect signs indicating that CCTV is in use, and those signs must include at least the following information:
- The identity and contact details of the data controller;
- The contact details for the data protection officer, if one has been appointed;
- The purposes for which data is processed;
- The purpose and legal basis for the processing;
- Any third parties to whom data may be disclosed;
- The security arrangements for the CCTV footage;
- The retention period for CCTV footage; and
- The existence of data subject rights and the right to lodge a complaint with the local data protection authority.
- Could such signage go some way to mitigate against the transparency issue here?
- How effective, and how widely applied, is existing CCTV signage policy?
Has the relevant information about the system been disclosed to those accused based on FRT?
A third transparency issue emerging in the USA is that when FRT is used in investigations leading to someone’s arrest and the defendant finds themself before the courts, their defence teams are denied access to any information about how that system worked, its propensity for error or bias and even the name of the system itself.35New Jersey Appellate Division One of First Courts in Country to Rule on Constitutional Rights Related to Facial Recognition Technologies, ACLU, June 2023, https://www.aclu-nj.org/en/press-releases/new-jersey-appellate-division-one-first-courts-country-rule-constitutional-rights. But, in a significant win for transparency, in June 2023, in one of the first cases of its kind, the Appellate Division of the Superior Court of New Jersey, while noting that FRT is “novel and untested”, ruled in State of New Jersey vs. Francisco Arteaga36 Superior Court of New Jersey Appellate Division Docket No. A-3078-21 State of New Jersey vs. Francisco Arteaga, decided 7 June 2023, https://law.justia.com/cases/new-jersey/appellate-division-published/2023/a-3078-21.html. that the following, as sought by the defence counsel, had to be disclosed to the defendant:
- The name and manufacturer of the facial recognition software used to conduct the search in this case, and the algorithm(s), version number(s) and year(s) developed;
- The source code for the face recognition algorithm(s);
- A list of what measurements, nodal points or other unique identifying marks are used by the system in creating facial feature vectors including, if those marks are weighted differently, the scores given to each respective mark;
- The error rates for the facial recognition system used, including false accept and false reject rates (also called false positive/false match and false negative/false non-match rates), as well as documentation on how the error rates were calculated, including whether they reflect test or operational conditions;
- The performance of the algorithm(s) used on applicable National Institute of Standards and Technology Face Recognition Vendor Tests, if available;
- The original copy of the query or “probe” photo submitted;
- All edited copies of the query or “probe” photo submitted to the facial recognition system, noting, if applicable, which edited copy produced the candidate list that the defendant was in, and a list of edits, filters or any other modifications made to that photo;
- A copy of the database photo matched to the query or “probe” photo and the percentage of the match, rank number or confidence score assigned to the photo by the facial recognition system in the candidate list;
- A list or description of the rank number or confidence scores produced by the system, including the scale on which the system is based (e.g. percentage, logarithmic, other);
- A copy of the complete candidate list returned by the facial recognition system, or the first 20 candidates in the candidate list if it is longer than 20, in rank order and including the percentage of the match or confidence score assigned to each photo by the facial recognition system;
- A list of the parameters of the database used, including:
- How many photos are in the database;
- How the photos were obtained;
- How long the photos are stored;
- How often the database is purged;
- What the process is for getting removed from the database;
- Who has access to the database;
- How the database is maintained; and
- The privacy policy for the database;
- The report produced by the analyst or technician who ran the facial recognition software, including any notes made about the possible match relative to any other individuals on the candidate list; and
- The name and training, certifications or qualifications of the analyst who ran the facial recognition search query.
The above list of items is a useful compilation of details to be considered when we seek transparency around the use of FRT by police. Similar details must be disclosed regarding the live use of FRT and, specifically, the parameters of how, why and when a watchlist was created and how a person came to be included.
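As an illustration of how the Arteaga disclosure items could be organised, the hypothetical Python structure below sketches one way of recording them for a single search; the field names are invented for this example and are not taken from the court order, any vendor schema or any police system.

```python
from dataclasses import dataclass, field


@dataclass
class FRTSearchDisclosure:
    """Hypothetical record mirroring the categories of information ordered
    disclosed in State of New Jersey vs. Francisco Arteaga (field names are
    illustrative only)."""
    software_name: str
    vendor: str
    algorithm_version: str
    false_match_rate: float                   # with documentation of how it was calculated
    false_non_match_rate: float
    probe_photo_original: str                 # reference to the unedited probe image
    probe_photo_edits: list[str]              # filters, substitutions or other modifications
    candidate_list: list[tuple[str, float]]   # (database photo ID, confidence score), in rank order
    database_size: int
    database_sources: str
    database_retention_policy: str
    analyst_name: str
    analyst_training: list[str] = field(default_factory=list)
```

Treating each search or deployment as a structured record of this kind is one way of operationalising the transparency this section calls for, including for live uses of FRT and the watchlists that drive them.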