SecurityBrief New Zealand - Technology news for CISOs & cybersecurity decision-makers
Review prompts police to halt plans to use facial recognition technology
Mon, 13th Dec 2021

Police are pressing pause for now on using facial recognition technology to identify people off live camera feeds, but will still use it on stored footage.

Police have spent millions of dollars expanding their high-tech capabilities over the last two years, but have agreed with the first-ever independent review of their use of facial recognition that they should now take a breath.

"Police will not use live automated FRT [facial recognition technology] until the impact from a security, privacy, legal, and ethical perspective is fully understood," said deputy chief executive Mark Evans.

The police-commissioned review by two leading critics of this country's lax laws around digital surveillance, Nessa Lynch and Andrew Chen, found "no evidence" police had been using the technology live.

But the pair warned that live use would impact Māori the most, and could constitute an unwarranted search in a public place - police should consult lawyers before taking that path.

"Monitoring of protests or community events with live automated FRT [as has happened in the UK] could have a chilling effect on rights to freedom of expression and peaceful assembly," Lynch and Chen write.

The immediacy and scope - including scanning passersby - of live use invited inaccuracy and bias.

"Multiple interviewees noted that police likely did not have social licence or consent to use live FRT, with some indicating concern that backlash to live FRT could lead to a loss of social licence for police use of CCTV feeds in general."

'Cameras will surpass 1 billion'

Law enforcement in many countries is increasingly using facial recognition because, unlike other biometric surveillance such as iris or fingerprint scanning, it is easy and tempting to do at a distance - and without people necessarily knowing.

Technology providers are encouraging their police customers as the video surveillance market surges towards revenues of US$24 billion this year.

Briefcam, a tech provider that New Zealand police use to scan non-live camera footage, cites industry estimates that "the number of installed cameras will surpass 1 billion".

The data could be overwhelming, Briefcam said, so it was offering "deep learning" software that goes through video to "identify, categorise and index the objects ... (such as clothing, bags, vehicles, animals, and other items)".
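The indexing idea Briefcam describes - tagging detected objects by category so footage can be searched rather than watched end to end - can be reduced to a toy in-memory structure. This is an illustrative sketch only; the class names, categories, and data below are hypothetical and bear no relation to any real product's internals.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Detection:
    """One object spotted by a (hypothetical) video-analysis model."""
    video_id: str
    timestamp: float   # seconds into the footage
    category: str      # e.g. "vehicle", "bag", "clothing"

def build_index(detections):
    """Group detections by category, so an analyst can jump straight to
    every moment a given object type appears instead of reviewing hours
    of video."""
    index = defaultdict(list)
    for d in detections:
        index[d.category].append((d.video_id, d.timestamp))
    return index

# Hypothetical detections from two camera feeds
dets = [
    Detection("cam1", 12.5, "vehicle"),
    Detection("cam1", 40.0, "bag"),
    Detection("cam2", 3.2, "vehicle"),
]
idx = build_index(dets)
# Searching "vehicle" returns both sightings, in order of ingestion
assert idx["vehicle"] == [("cam1", 12.5), ("cam2", 3.2)]
```

The point of the structure is that the expensive step (running a detection model over the video) happens once, after which queries are cheap dictionary lookups.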

The sources of facial recognition images are myriad - and growing.

Police here already collect people's photos while on the beat using an app called OnDuty - most controversially, on young Māori stopped in the street, as an RNZ investigation revealed.

The reviewers in today's report said there was "ambiguity" about what could be done with such photos, and police knew they needed "greater clarity".

Some of these photos don't yet make it into databases that can be searched by facial recognition, but the reviewers warned: "Police should be aware that merging image databases together in the future could expose more images to the FRT capabilities."

They said photos on drivers' licences and passports, and harvested off social media, could be merged this way - and that would require more rules to mitigate "privacy and misuse risks".

It would be especially "high risk and very problematic" if the police's open-source intelligence (OSINT) team, which searches social media, adopted facial recognition.

Police have shown reluctance to detail what social media search tools they use.

Daon, another provider of biometrics in New Zealand, used for passport processing, said in an industry webinar that deep-learning algorithms demand a lot of data - and are getting it from the "huge biometric datasets available now" arising from the "widespread capture and sharing" of data, particularly from people's phones.

The computational power used to train biometric algorithms had "improved massively", Daon said.

Despite these rapid technological advances, Lynch and Chen warned the police there was "a very limited current evidence base for the efficacy and cost benefit of live automated FRT in policing".

Any proposal to use more of it must "identify a clear problem to be solved".

Facial recognition faces fewer barriers when used to analyse non-live footage, since police are typically looking for known suspects rather than scanning people on a more general basis.

"This technology will continue to be used by specialist and trained teams," police said.

Today's report shows police are imposing boundaries on themselves, such as limiting the results from doing face-matching searches to, say, the top 20 matches, "to prevent 'fishing' for data and to mitigate privacy impacts for immaterial people appearing in results".
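The cap the report describes works as a post-processing step on similarity scores: however many faces a search touches, only the best-ranked handful ever reach an investigator. As an illustrative sketch only - the embeddings, dimensions, and data here are hypothetical, not any police system - limiting a face search to its top 20 matches might look like:

```python
import numpy as np

def top_k_matches(probe, gallery, k=20):
    """Return at most k gallery IDs ranked by cosine similarity to the probe.

    Capping results at k limits 'fishing' for data: low-ranked,
    likely-irrelevant people never appear in the results at all.
    """
    ids = list(gallery.keys())
    mat = np.stack([gallery[i] for i in ids])         # (n, d) face embeddings
    # Normalise rows so a dot product equals cosine similarity
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    scores = mat @ p
    order = np.argsort(scores)[::-1][:k]              # best k only, descending
    return [(ids[i], float(scores[i])) for i in order]

# Hypothetical gallery: 100 random 128-dimensional embeddings, one probe
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(100)}
probe = rng.normal(size=128)
results = top_k_matches(probe, gallery, k=20)
assert len(results) == 20
```

The design choice is that the limit is enforced inside the search function itself, so no downstream tool or analyst ever sees the discarded low-confidence matches.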

Police face other limits because the data for facial recognition is held in patchy, siloed systems, where people's photos are of "vastly varying age and quality".

Even so, the reviewers are recommending more and stronger rules, such as limits on how long images of people are kept in police databases, especially where people faced only low-level charges or investigations came to nothing.

Police have adopted all 10 of the reviewers' recommendations.

"It is critical that we continue to use technology safely and responsibly, as accuracy and bias are key concerns for FRT," Evans said in a long statement.

Police are now making a virtue of openness, inviting external scrutiny of their high-tech goals, after attracting criticism for an earlier closed-doors approach. That approach included not doing a privacy impact assessment on a $23 million facial recognition-capable upgrade to their ABIS2 image-handling software, and secretly testing a controversial internet-searching facial recognition tool from Clearview AI.

Facial recognition information can still be hard to come by. When RNZ asked the Department of Internal Affairs to release four security audits of the facial recognition passport systems it uses - arguing that a public asked to trust these systems should be told how secure they are - the reports were eventually released almost entirely blanked out.

The Ombudsman backed the DIA for not releasing more, saying the reports covered cyber-attack testing and vulnerabilities.