Last week’s global cyberattack, which brought large swathes of the NHS to its knees, raises some extremely serious questions about the recently passed Investigatory Powers Act (IP Act), in particular the provision for the Government to issue Technical Capability Notices.
Technical Capability Notices (TCNs) are a small but critical part of the IP Act, and one we consistently highlighted to MPs and Peers as hugely problematic during the Parliamentary debates before the Bill became law.
In brief, a TCN gives the Government the power to order a telecommunications company or service with over 10,000 “customers” to “provide and maintain” a variety of ways for the police and the intelligence agencies to access, monitor and decrypt the communications of those customers.
In practice this can mean that companies and services such as your phone provider, postal service, social media platform, email, Wi-Fi or over-the-top service provider may be required to build systems, create weaknesses or “backdoors” in their existing systems, or decrypt encrypted communications if they are issued with a TCN.
On the face of it you can see why the Government, the intelligence agencies and the police want these powers. They are, we are told, necessary and proportionate in the battle to keep the country and its citizens safe from terrorism, serious crime and child sexual exploitation.
Most of us when faced with those threats and fears will happily agree to such intrusion if we think it will keep us safe. But what if this method of keeping us safe from those threats makes us less safe? What if the requirement for a company to maintain a weakness, build a backdoor or break encryption exposes each and every one of us to cybercrime or cyberwar? It may sound ludicrous, but last week’s international cyberattack showed that these concerns are no longer what ifs but realities.
The WannaCry infection which hit unpatched Microsoft systems around the world, including within the NHS, had its roots in the National Security Agency (NSA).
A while back the NSA, America’s version of our GCHQ, discovered a range of weaknesses in Microsoft software. Rather than do the responsible thing and tell the company, enabling them to create patches to protect their customers, the NSA kept quiet. Not only did they keep quiet but they set about using the vulnerabilities to hack the devices of foreign governments, companies and organisations in order to snoop on them.
Whilst exploiting Microsoft’s vulnerabilities, the NSA failed to make sure its own systems were secure. Whatever weakness existed in those systems was subsequently exploited by the hacking group The Shadow Brokers, who published their findings on the internet in April 2017.
It took the NSA eight months to realise they had been hacked. Microsoft eventually discovered for themselves the vulnerabilities in their systems and subsequently released patches, but people, even when warned, can be slow to update systems or apply patches, no matter how urgent or damaging the problem might be.
And that is where the opportunities for exploitation and the problem for all our security lie.
Last week’s incident is not going to be a one-off. The chaos, the fear, the threat to personal data, in fact the threat to people’s lives caused by a malware attack, are all likely to happen again, because in a connected world cyberspace is the new front line for crime, terrorism and warfare.
As Brad Smith, the President and Chief Legal Officer of Microsoft, said in a blog earlier this week: “The governments of the world should treat this attack as a wake-up call. They need to take a different approach and adhere in cyberspace to the same rules applied to weapons in the physical world. We need governments to consider the damage to civilians that comes from hoarding these vulnerabilities and the use of these exploits.”
And yet the IP Act and the use of TCNs as a method of keeping the country safe appear to permit the opposite. Requiring a company to build in a weakness or leave a vulnerability unpatched on the order of the Government creates insecure systems which can be exploited by those with malicious intent. There is no such thing as a weakness that can only be used for good.
We have long pointed out that lawful hacking, backdoors and decryption might seem a solution for spotting malicious intent where it is communicated by terrorists, criminals or paedophiles, but in the world of cyber security the threats to our safety come in many forms which do not rely on communication, or on seeing what someone has searched for online.
The NSA thought their actions would not be spotted. The UK Government seem to think that any weakness they create they will control, and that no one need fear their activity. A TCN, on the other hand, places all the risk at the feet of the company or service provider: if a problem occurs with a backdoor, a maintained weakness or the decryption of data, they will have to take the blame even though they were under orders from the Government. That, of course, is if the problem is ever traced back at all, because TCNs are secret and no one issued with one is allowed to reveal the fact.
If the plan to protect us from terrorism, serious crime and paedophiles is to demand the ability to hack or decrypt then the security we will be given with one hand will create insecurity with the other.
No matter how much cyber security investment is promised by Government, and no matter how well the National Cyber Security Centre (NCSC) establishes a world-leading cyber security strategy, the reality is that all our security is at risk when vulnerabilities are lawfully permitted, because no vulnerability belongs only to the good; it will be exploited by the bad.
Hopefully last week’s malware attack will be, as Brad Smith suggests, “a wake-up call”, but with the obligations under TCNs due to be passed as a statutory instrument as soon as Parliament returns after the election, there is little hope that the Government, whoever forms it, will react quickly enough.