UK could force E2E encrypted platforms to do CSAM scanning
The UK government has tabled an amendment to the Online Safety Bill that could put it on a collision course with end-to-end encryption.
It’s proposing to give the incoming Internet regulator, Ofcom, new powers to force messaging platforms and other types of online services to implement content-scanning technologies, even if their platform is strongly encrypted — meaning the service/company itself does not hold keys to decrypt and access user-generated content in the clear.
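To illustrate what that means in practice, here’s a minimal sketch of E2E encryption using the open source PyNaCl library (the names and flow are illustrative; real messengers layer on more elaborate protocols, such as Signal’s ratcheting scheme). The point is that the platform only ever relays ciphertext, so it holds nothing it could scan:

```python
# A minimal illustration of end-to-end encryption using PyNaCl
# (pip install pynacl). Names are illustrative; real messengers use
# more elaborate protocols (e.g. the Signal protocol's ratcheting keys).
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never
# leave the device, so the service provider cannot decrypt anything.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"hello Bob")

# The platform only ever relays `ciphertext` and holds no key to read it.
# Only Bob can decrypt, using his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"hello Bob"
```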
The home secretary, Priti Patel, said today that the government wants the bill to have greater powers to tackle child sexual abuse.
“Child sexual abuse is a sickening crime. We must all work to ensure criminals are not allowed to run rampant online and technology companies must play their part and take responsibility for keeping our children safe,” she said in a statement — which also offers the (unsubstantiated) claim that “Privacy and security are not mutually exclusive — we need both, and we can have both and that is what this amendment delivers.”
The proposed amendment is also being targeted at terrorism content — with the tabled clause referring to: “Notices to deal with terrorism content or CSEA [child sexual exploitation & abuse] content (or both)”.
These notices would allow Ofcom to order a regulated service to use “accredited” technology to identify CSEA or terrorism content which is being publicly shared on their platform and “swiftly” remove it.
But the proposed amendment goes further — also allowing Ofcom to mandate that regulated services use accredited technical means to prevent users from encountering these types of (illegal) content — whether it’s being shared publicly or privately via the service, raising questions over what the power might mean for E2E encryption.
The UK government has — for years — been calling on tech giants like Facebook not to expand their use of E2E encryption, arguing it would impede the fight against CSEA by making it harder for law enforcement to detect offences. The tech industry, meanwhile, has pushed back, pointing out that E2E encryption is the security gold standard and that measures to limit its use risk weakening online security for everyone.
This deadlocked debate has, more recently, led the Home Office to shift tack and suggest a third way forward — arguing that scanning technologies can be developed for use on E2E encrypted platforms without compromising robust security or privacy.
Last year it announced a plan to put some money where its mouth is: saying it would be splashing a little taxpayer cash on a “Safety Tech” competition to fund companies to develop prototype technologies for detecting CSEA on E2E encrypted services — claiming this was possible “without compromising user privacy”.
However, security and privacy experts remain sceptical that technology can thread this political needle — and have voiced robust opposition to proposals to inject some form of on-device scanning into E2E encrypted services, such as the controversial CSAM (child sexual abuse material) detection tool Apple announced last year, when it similarly claimed the safety-focused tech would safeguard privacy. (Until, in the face of sustained opposition, Apple canned the rollout; and, by year’s end, quietly dropped reference to it.)
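For a sense of what ‘on-device scanning’ means technically, below is a deliberately simplified sketch of client-side perceptual hash matching, written with the open source imagehash library. To be clear, this illustrates the general concept rather than Apple’s system (which layered blinded hashes and threshold secret sharing on top of the basic idea), and the blocklist value shown is made up:

```python
# Deliberately simplified sketch of client-side (on-device) scanning:
# perceptually hash media *before* it is encrypted and compare against a
# blocklist of hashes of known illegal material. Not Apple's NeuralHash;
# the blocklist entry below is a made-up placeholder value.
# Requires: pip install pillow imagehash
from PIL import Image
import imagehash

# Hypothetical blocklist (in reality such hash lists are distributed by
# child-safety bodies, e.g. NCMEC).
BLOCKLIST = [imagehash.hex_to_hash("d1d1b9a3e5c46a19")]

def scan_before_send(path: str, threshold: int = 4) -> bool:
    """Return True if the image should be flagged before the client
    encrypts and sends it."""
    h = imagehash.phash(Image.open(path))
    # imagehash overloads `-` to give the Hamming distance between hashes;
    # a small distance means a near-duplicate image.
    return any(h - known <= threshold for known in BLOCKLIST)
```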
Prior to Apple going cold on its home-made on-device scanning tech, Patel had spoken warmly of the proposal. But if companies won’t voluntarily develop and implement scanning tech to circumvent E2E encryption, the UK government appears to be planning to force their hand — through legislation.
In the tabled amendment, it’s proposed that a regulated service could be required to use what the clause dubs “best endeavours” to develop or source technology for the aforementioned purposes (i.e. detecting and removing CSEA and/or terrorism content).
That suggests the choice facing platforms operating in the market in the near future could be to either develop their own scanning tech — or be made to pay for “accredited” tech developed by a third party.
The five winning projects of the Home Office’s CSEA detection “Safety Tech” competition, meanwhile, were announced last November — and those involved potentially stand to gain a much bigger payday if this amendment passes, creating a market for their prototype services.
Companies contributing to the winning projects include parental control app maker SafeToNet; digital forensics firm Cyan Forensics; real-time risk intelligence firm Crisp Thinking; enterprise security firm GalaxKey; content moderation software maker Image Analyser; age assurance tech firm Yoti; content moderation startup DragonflAI; and digital forensics firm T3K-Forensics.
Project winners were given five months to come up with prototypes, which the government said would be evaluated by an external assessor.
But, over seven months later, it’s not clear how the prototypes have fared.
Safety tech evaluation
A multi-disciplinary research group involving a number of UK universities, called Rephrain, was chosen by the government to carry out the (technical) evaluation. It released a scoping document about the work in March — when it also announced a month-long public consultation.
A note on Rephrain’s website at the time acknowledged “tensions” between the safety aims and user privacy — and committed the organisation to publicly publishing its evaluation. “Given the tensions that arise between protecting vulnerable users and protecting user privacy at large, the key steps in Rephrain’s evaluation process are (1) to seek input from the community and (2) to publicly publish all results, whilst ensuring that academic rigour and objectivity remain the core of our work, and to inform future directions in this area,” it wrote.
At the time of writing, it’s not clear whether Rephrain’s evaluation has been completed; we were unable to locate a final report on its website.
We’ve also reached out to the group — and the government — with questions about the results of the Safety Tech challenge and will update our reporting if we get a response.
If Rephrain’s evaluation remains ongoing — or, indeed, if the prototypes are found not to be effective at detecting only illegal content — it would raise questions about why the government is rushing ahead with a law mandating the use of “accredited” technologies that don’t exist yet. (And which some security and privacy experts would likely argue may never exist.)
The Home Office’s press release only talks generally about its Safety Tech initiative “demonstrating” (note the active present tense) that it’s possible to detect CSAM on E2E platforms “while respecting user privacy” — but there’s no word on any specific prototypes or how well or poorly these novel technologies performed (e.g. the proportion of false positives flagged).
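That omission matters because, at messaging-platform scale, even a seemingly tiny false positive rate can swamp true detections. A back-of-the-envelope calculation, using purely illustrative figures rather than anything reported by the Home Office or the challenge, shows why:

```python
# Base-rate arithmetic for scanning at platform scale. All figures are
# illustrative assumptions, not results from the Safety Tech challenge.
messages_per_day = 10_000_000_000  # assumed daily volume of a big platform
false_positive_rate = 0.001        # assumed 0.1% FPR: sounds "accurate"
prevalence = 0.000001              # assume 1 in a million items is illegal

true_hits = messages_per_day * prevalence                           # 10,000
false_alarms = messages_per_day * (1 - prevalence) * false_positive_rate

print(f"{false_alarms:,.0f} innocent items flagged per day")   # ~10 million
print(f"precision: {true_hits / (true_hits + false_alarms):.2%}")  # ~0.10%
```

In other words, on these assumed numbers roughly 99.9% of flagged content would be innocent; exactly the kind of statistic a public evaluation report would need to address.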
The government does note that two of the five winners received additional “stretch” funding — suggesting their prototypes may have been considered more promising than the others. But its PR does not even specify which companies unlocked that extra taxpayer cash — so there is a concerning lack of detail accompanying this legislative push for a ‘technosolutionist’ fix.
Nor is it clear which body — public or private, existing or yet-to-be-formed — would be responsible for ‘accrediting’ the tech in question. (We’ve also asked about that.)
One more thing to note: in recent years, UK spy agencies have been pressing for a tech-enabled ‘third way’ to be found to deal with E2E encryption — such as GCHQ’s ‘ghost protocol’ proposal for intelligence agencies to be invisibly CC’d on encrypted chats (an idea that was roundly condemned by security experts and the tech industry as posing a risk to user trust and security). So it’s interesting to consider what other forces may be driving the Home Office’s proposal for “accredited” technologies to enforce scanning of E2E encrypted content.
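To make the ‘ghost protocol’ idea concrete: in many group messaging designs a fresh per-message key is wrapped separately for each recipient, so a platform compelled to append an agency’s public key to that recipient list could let it read along without weakening the cipher itself. Here’s a toy sketch of that weakness, again using PyNaCl; it illustrates the criticism, not GCHQ’s actual specification:

```python
# Toy model of why the "ghost protocol" alarmed security experts: when
# message keys are wrapped per-recipient, an extra recipient can be added
# invisibly. Illustrative only; not GCHQ's actual proposal.
from nacl.public import PrivateKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random as nacl_random

def send_group_message(message: bytes, recipient_pubkeys: list):
    # One fresh symmetric key per message, wrapped to each recipient.
    msg_key = nacl_random(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(msg_key).encrypt(message)
    wrapped_keys = [SealedBox(pk).encrypt(msg_key) for pk in recipient_pubkeys]
    return ciphertext, wrapped_keys

alice, bob, agency = (PrivateKey.generate() for _ in range(3))

# The "ghost": the server silently appends an agency key to the recipient
# list; the chat still looks end-to-end encrypted to its participants.
ct, keys = send_group_message(b"private chat",
                              [bob.public_key, agency.public_key])
```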
It’s true that child safety campaigners have also expressed concern that the Online Safety Bill, as originally drafted, won’t deliver the sought-for purge of CSEA content. And campaigners have been pressing for tougher measures to address concerns around E2E encrypted platforms.
In a statement accompanying the Home Office’s announcement of the amendment, Peter Wanless, NSPCC chief executive, said the proposed measure will “strengthen protections around private messaging and ensure companies have a responsibility to build products with child safety in mind”. He also dubbed the Online Safety Bill a “once-in-a-generation opportunity to ensure children can explore the online world safely”.
“This positive step shows there doesn’t have to be a trade-off between privacy and detecting and disrupting child abuse material and grooming,” he added — though, again, without offering any detail to back up the assertion.
It’s true the bill has faced criticism from lawmakers in parliament for, among other things, lacking robust enough powers to address the harms the government says it wants the legislation to target — giving ministers impetus to press for further powers for Ofcom.
Parliamentarians have also previously urged a more comprehensive approach to tackling illegal content.
There has also generally been broad, cross-party support for the government to introduce online safety legislation (despite huge controversy from civil society and digital rights groups, and from security and privacy experts). Although the parliamentary scrutiny process has also flagged concerns — echoing civil society — about the bill’s impact on freedom of expression.
The draft legislation, which was published back in May 2021, was only introduced to parliament this March — but remains a stated priority for the government. Although, in recent months, the digital secretary, Nadine Dorries, has released a flow of revisions and additional measures to “strengthen” its provisions — including expanding its scope to cover scam ads and cyberflashing; speeding up the application of criminal liability for non-compliant senior execs; and setting out requirements for platforms to tackle anonymous trolling, among other expansions.
Just this week there was confirmation of another addition: the government said it would be making “foreign interference” — like disinformation — a priority offence under the bill, piling yet more requirements on tech firms to conduct complex assessments in order to remove contravening content…
The upshot of all this is a ballooning of the scope of a piece of legislation that critics considered unworkably unwieldy from the start.
Some also point to the steady flow of revisions as a sign the entire project is pure, hubristic overreach and doomed to fail — but not without causing untold damage to UK citizens’ rights and freedoms in the process; and to UK startups and businesses which will face crippling costs trying to comply with such a conflicting mesh of requirements.
Although — clearly — the UK’s domestic ‘safety tech’ sector is set to cash in.
Nor will there be a huge amount of time for UK tech businesses to figure out how to (try to) comply with all the provisions — especially when it comes to tackling illegal content. (So more gravy for ‘safety tech’ firms — who will be selling ‘compliance’ for a fee.)
The bill provides for a two-month implementation period for the regulator’s powers to apply and — also today — Ofcom urged companies to prepare for the incoming legislation, saying it expects the rules to be passed by early 2023 “at the latest”, and warning it intends to hit the ground running with a “100-day plan” to get the regime up and running.
There will be a bit more leeway on the full compliance timeline, though, with Ofcom saying it’s giving companies until mid-2024 to get all their ducks in a row.
“Within the first 100 days of our powers taking effect, Ofcom will focus on getting the ‘first phase’ of the new regulation up and running — protecting users from illegal content harms, including child sexual exploitation and abuse, and terrorist content,” the regulator wrote, saying it will set out a draft Code of Practice on illegal content harms “explaining how services can comply with their duties to tackle them”; and draft guidance on “how we expect services to assess the risk of individuals coming across illegal content on their services and associated harms”.
“Within three months, companies must have completed their risk assessments related to illegal content, and be ready to comply with their duties in this area from mid-2024 once the Code of Practice has been laid in Parliament,” it added.
UK vs EU
While the UK’s approach to online content regulation looks distinctive, to put it politely — given such a lumpy, pot-luck conglomerate of provisions — the country is not alone in trying to push a technology-focused fix to address concerns about the rise of CSEA online.
Back in May, European Union lawmakers presented their own plan for tackling online child sexual abuse — which, similarly, emphasises the use of novel technologies to automate detection and removal of this type of illegal material. The EU plan even goes further — in wanting to require that platforms also identify and prevent ‘grooming’ (i.e. online activity that could lead to CSEA), not just purge the material itself.
The bloc’s proposal has generated major controversy in a region with a comprehensive data protection framework — and where privacy is a fundamental right that’s listed in the EU’s charter.
So it remains to be seen how the bloc’s co-legislative process could reshape the suggested law.
But — by way of comparison — it’s worth noting that the equivalent ‘accredited technologies’ in the EU proposal are envisaged as being developed by a new, dedicated EU body that would be set up for preventing and detecting CSEA (aka the EU Centre), with the process of accreditation involving the European Data Protection Board, an existing steering and oversight body for EU data protection regulation.
So, essentially, if the EDPB judges a novel scanning tech to pose a risk to citizens’ privacy, it is duty-bound not to sign off on its use at all.
Which means the EU proposal contains at least one major check and balance against ‘technosolutionism’ trampling citizens’ rights and/or fatally compromising E2E encryption that the UK equivalent lacks — since the country is no longer an EU member, and UK domestic data protection regulation now sits outside the bloc.
Albeit, the EU has also said it’s working with industry and academia to “support research that identifies technical solutions to scale up and feasibly and lawfully be implemented by companies to detect child sexual abuse in end-to-end encrypted electronic communications in full respect of fundamental rights” — so its lawmakers are refusing to give up on ‘novel tech’ altogether. But it’s fair to say that EU law sets a high bar for being able to apply surveillance technologies in a disproportionately broad way.
The UK, however, is in the midst of ‘reforming’ domestic data protection legislation — as it flirts with post-Brexit deregulation — and these in-train reforms could result in more powers being handed to the secretary of state to set the priorities for the UK’s domestic data protection watchdog. So it’s not hard to see how a UK political push to promote (domestic) ‘safety tech’ could, inexorably, lead to reductions in privacy for UK citizens — as the government can simply choose to reconfigure UK regulatory activity (and, indeed, change entire laws) to fit its political aims.
That, dear reader, is what Brexit means.
With the UK outside the EU, and with ministers also now busy reframing the country’s relationship with European human rights law (via the proposed UK ‘bill of rights’), the ability for an overarching, rights-based legal order to enforce robust checks and balances on ‘accredited’ technosolutionism applied through domestic UK law is being reduced.
So while there may be some surface similarities in approach here — between what the UK government wants to do with this amendment to the Online Safety Bill and the EU’s so-called ‘chat control’ plan, since both are eyeing CSEA scanning tech — the legal contexts are already distinct and continuing to diverge further. Or, well, at least that’s the case for as long as Boris Johnson’s government feels empowered to tear up the old rule books, do away with ethical standards and redraw the boundaries of citizens’ rights how it sees fit.