Deepfakes the 'next wave of concern' - but can law really stomp it out?
Tue, 21st May 2019

The dishonest and sometimes outright manipulative world of ‘deepfakes' should be carefully considered before any government attempt to deal with the problem is formulated, according to a new Law Foundation-backed study.

Released today, the Perception Inception report looks at how photographs and videos are subject to 21st-century manipulation, a practice whose origins stretch back to the birth of photography.

But technology has now advanced to the point where images, sound, and video can be manipulated to look entirely genuine. What's more, the fakery can often be almost impossible to detect.

“These representations can make it look and sound like something happened when it did not, or that it happened differently than it did,” the report says.

With the government's Christchurch Call underway - an attempt to curb hate and the sharing of harmful content online - deepfakes and other synthetic media will be the next wave of concern.

“At least 16 current acts and formal guidelines touch on the potential harms of synthetic media. Before calling for new law or new regulators, let's work out what we've already got, and why existing law is or isn't working,” says the report's co-author, Tom Barraclough.

Those Acts include the Privacy Act 1993; the New Zealand Bill of Rights Act 1990; the Films, Videos and Publications Classification Act 1993; the Copyright Act 1994; the Human Rights Act 1993; and many others.

The authors acknowledge that New Zealand is not immune to the harmful effects of deepfakes and synthetic media, and that the country could lead in developing law and policy that balances innovation with critical scrutiny.

“Nevertheless, this does not necessarily mean that New Zealand needs new law, although we do not rule this out,” the report says.

“Enforcing the existing law will be difficult enough, and it is not clear that any new law would be able to do better. Overseas attempts to draft law for deepfakes have been seriously criticised,” adds Barraclough.

So if other countries' attempts to tackle the problem are flawed, what's the answer for New Zealand? The first step, the researchers say, is to agree on specifics – but that can't be done without first analysing the status quo.

“Calling for a kind of social media regulator is fine, but these suggestions need substance. What standards will the regulator apply? Would existing agencies do a better job? What does it mean specifically to say a company has a duty of care? The law has to give any Court or regulator some guidance.

“Further, we must ask what private companies can do that governments can't. We have to consider access to justice: often a quick solution is more important than a perfect one. Social media companies can restrict content in ways that governments can't - is that something we should encourage or restrict?”

The researchers point out that fake video is not inherently bad: synthetic media technologies are a key strength of New Zealand's creative industries, and these should not be stifled. But there are many harmful uses that do need to be curtailed, including the creation of non-consensual pornography and the use of synthetic speech to produce false recordings of public figures.

“‘Fakeness' is a slippery concept and can be hard to define, making regulation and automatic detection of fake media very difficult,” says Barraclough.
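For the technically minded, that detection point can be made concrete with a deliberately simplified sketch, which is not drawn from the report. Suppose a hypothetical detector scores each video frame from 0.0 (looks genuine) to 1.0 (looks synthetic), and a filter collapses those scores into a yes/no decision with a fixed threshold; the function name, the scores, and the 0.7 threshold below are all invented for illustration.

# Purely illustrative sketch, not from the Perception Inception report.
# Assumes a hypothetical detector that scores each frame from 0.0 (genuine)
# to 1.0 (synthetic); the 0.7 threshold is arbitrary.

from statistics import mean

def flag_as_synthetic(frame_scores, threshold=0.7):
    """Collapse per-frame 'fakeness' scores into a single yes/no decision.

    This is where the definitional problem bites: 'fakeness' has no crisp
    boundary, so any fixed threshold draws an arbitrary line in a grey area.
    """
    return mean(frame_scores) > threshold

# Hypothetical scores: a legitimately colour-graded clip and a face-swapped
# one can produce overlapping numbers, so the same rule misfires both ways.
print(flag_as_synthetic([0.62, 0.68, 0.71]))  # False (imagine a real deepfake slipping through)
print(flag_as_synthetic([0.72, 0.74, 0.69]))  # True (imagine an honest edit being flagged)

In other words, a rule that looks precise in code still has to answer the question the report poses in law: where, exactly, does legitimate manipulation end and harmful fakery begin?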

The report adds that disinformation, misinformation, and ‘fake news' have connections to the world of synthetic media.

“Some have said that synthetic media technologies mean that seeing is no longer believing, or that reality itself is under threat. We don't agree with that. Harms can come from too much scepticism as well as from not enough.

“None of this is to say that no action whatsoever should be taken,” the report concludes.

“It is completely legitimate to call for regulatory intervention. But the merits of any course of action cannot be assessed without specifics. What exactly is being proposed? In the case of harmful synthetic media, even if we all agreed we should ban it or regulate it, how could we realistically do that? What exactly are we looking to prevent?”