British intelligence recycles old argument for borking encryption: think of the children!
Written by Techbot

Comment Two notorious characters from the British security services have published a paper that once again suggests breaking end-to-end encryption would be a good thing for society. 

Nearly four years ago Ian Levy, technical director of the UK National Cyber Security Centre, and Crispin Robinson, technical director for cryptanalysis at British spy agency GCHQ, published a paper arguing for “virtual crocodile clips” on encrypted communications that could be used to keep us all safe from harm. On Thursday they gave it another shot [PDF], with a new paper pushing a very similar argument while acknowledging its failings.

“This paper is not a rigorous security analysis, but seeks to show that there exist today ways of countering much of the online child sexual abuse harms, but also to show the scope and scale of the work that remains to be done in this area,” they write.

“We have not identified any techniques that are likely to provide as accurate detection of child sexual abuse material as scanning of content, and whilst the privacy considerations that this type of technology raises must not be disregarded, we have presented arguments that suggest that it should be possible to deploy in configurations that mitigate many of the more serious privacy concerns.” 

The somewhat dynamic duo argues that, to protect against child sexual abuse and the material it produces, it’s in everyone’s interests for law enforcement to have access to private communications. The same argument has been used many times before, usually invoking one of the Four Horsemen of the Infocalypse: terrorists, drug dealers, child sexual abuse material (CSAM), and organized crime.

The plan is to restart attempts at “client-side scanning” but with service providers – who are ostensibly offering encrypted communications – asked to insert themselves in the process to check that CSAM isn’t being sent around online. Law enforcement could then work with these companies to crack down on the CSAM scourge.
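To make the proposed architecture concrete, here’s a minimal sketch in Python of what a scan-then-encrypt pipeline could look like – our reading of the idea, not code from the paper. Every name below is hypothetical, and a real deployment would use a perceptual-hash database curated by a child-safety organization rather than these stand-ins.

```python
import hashlib

# Fingerprints of known abuse imagery; in real proposals this list would be
# curated by a child-safety NGO and shipped to the client by the provider.
KNOWN_BAD_HASHES: set[str] = set()

def fingerprint(data: bytes) -> str:
    # Real systems use a perceptual hash (PhotoDNA, NeuralHash) that survives
    # resizing and re-encoding; SHA-256 stands in only to keep this runnable.
    return hashlib.sha256(data).hexdigest()

def send_attachment(attachment: bytes, encrypt, transmit, flag_for_review):
    # The scan runs on the device *before* encryption - the whole point of
    # client-side scanning. The ciphertext itself stays unreadable in transit.
    if fingerprint(attachment) in KNOWN_BAD_HASHES:
        # Matches go to human review first, per Levy and Robinson's proposal,
        # rather than straight to law enforcement.
        flag_for_review(attachment)
        return
    transmit(encrypt(attachment))
```

The salient design choice is that the check happens before the message is sealed: the channel stays encrypted end to end, but the content is inspected at one end – which is why critics call this a backdoor in all but name.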

Apple infamously tried to make the same argument to its users last year before backing down. It turns out promising privacy and then admitting you’re going to be scanning users’ files isn’t a popular selling point.

Apple can’t solve it, neither can we

In their latest paper Levy and Robinson argue that this isn’t a major issue, since non-governmental organizations could be used to moderate the scanning of personal information. This would avoid the potential abuse of such a scheme, they argue, and only the guilty would have something to fear.

It’s not a new argument, and has been used again and again in the conflict between encryption advocates who like private conversations and governments that don’t. Technology experts mostly agree such a system can’t be insulated from abuse – backdoors can always be found, after all. Governments would prefer to think otherwise, but the paper does at least acknowledge that people seeking privacy aren’t suspects.

“We acknowledge that for some users in some circumstances, anonymity is, in and of itself, a safety feature,” Levy and Robinson opine. “We do not seek to suggest that anonymity on commodity services is inherently bad, but it has an effect on the child sexual abuse problem.” 

Which is a bit like saying conversations can be used to plan crimes so they too should be monitored. No one’s denying the incredible harm that stems from the scum who make CSAM, but allowing monitoring of all private communications – albeit by a third party – seems a very high price to pay.

Apple backed down on its plans to scan users’ files for such material in part because it has built its marketing model around selling privacy as a service to customers – although this offer does not apply in China. Therein lies the point – if Apple is willing to let Middle Kingdom mandarins access data, there’s no guarantee that it won’t do the same for others if it’s in the corporate interest.

That scheme centered on searching for images using the NeuralHash machine-learning model to identify CSAM – a model the authors say “should be reasonably simple to engineer.” The problem is that the same technology could also be used to identify other images – such as pictures mocking political leaders or expressing a viewpoint someone wanted to monitor.
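To see why that concern is structural rather than speculative, consider a toy matcher in the same vein – an illustration of perceptual-hash matching in general, not Apple’s actual NeuralHash pipeline, with made-up hash values throughout:

```python
def hamming(a: int, b: int) -> int:
    # Number of differing bits between two fixed-width perceptual hashes.
    return bin(a ^ b).count("1")

def matches(image_hash: int, watchlist: list[int], threshold: int = 8) -> bool:
    # Perceptual hashes of near-duplicate images differ in only a few bits,
    # so matching is a nearest-neighbour test, not exact equality.
    return any(hamming(image_hash, h) <= threshold for h in watchlist)

# The matcher has no idea *why* a hash is listed - swap the watchlist and
# the same code hunts a different category of image.
csam_hashes = [0x1A2B3C]  # hypothetical fingerprint values
meme_hashes = [0x7F00FF]
print(matches(0x1A2B3D, csam_hashes))  # True: one bit away from a listed hash
print(matches(0x1A2B3D, meme_hashes))  # False: 11 bits away
```

Nothing above distinguishes CSAM from satire; the list defines the target, and whoever controls the list controls what gets flagged.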

Levy and Robinson think this is a fixable problem. More research is needed into verifying age, they suggest – something the UK is wrestling with at the moment. Also, human moderators should be involved before the information on suspected images is passed on to law enforcement.

Not my problem

Interestingly, the two repeatedly make the point that this is going to be the service providers’ responsibility to manage. While they stress that the paper is not official government doctrine, it’s clear Her Majesty’s Government has no intention of picking up the tab for this project, nor of overseeing its operation.

“These safety systems will be implemented by the service owner in their app, SDK or browser-based access,” they say. “In that case, the software is of the same standard as the provider’s app code, managed by the same teams with the same security input.” 

And allowing private companies to access user data with government approval has always worked so well in the past. This is an old, old argument – as old as encryption itself.

We saw it first crop up in the 1970s, when Whitfield Diffie and Martin Hellman published on public-key encryption (something GCHQ had developed independently years before). Such systems were labelled munitions, and their use and export were severely limited – PGP creator Phil Zimmermann endured three years of criminal investigation in the 1990s for trying to enable private conversations.

As recently as 2019, someone at the US Department of Justice slipped the leash and suggested they didn’t want a backdoor, but a front one – again using the CSAM argument. Some things never change. ®
