The Internet of Things looms large. The analysts at Gartner estimate that there will be 4.9 billion connected “things” in 2015, a figure they expect to reach 25 billion by 2020. These connected devices will do everything from regulating the heat in our homes to regulating huge electrical utilities. They will run our factories and our industries – to some extent they already do.
The concern? IoT devices in industry today fall into two camps. Some are legacy devices that have been in place for years. Others are part of a new order of hardware devices so powerful yet so cheap that they are almost disposable. At both ends of the spectrum, though, they have one thing in common: most of the devices are inherently insecure. For many, this brilliant future – this brave new world – is built on a house of cards.
In a world where the U.S. Department of Defense reports that intrusions into critical infrastructure have increased by more than 17 times, you would think that our dependency on these devices would raise real concerns. It does. At least among security professionals, like those who attended the IQPC Cyber Security Forum in Calgary over the past two days. ITWC is a media sponsor, and I attended to represent CSO Digital – our security publication. But what we learned is that the lack of security in the Internet of Things may be known to professionals, but it has not been truly appreciated by executives, boards of directors or the general public.
How do you make this real? One security consultant took matters into his own hands. With his own investment of a few hundred dollars, Nicolas McKerrall, a security engineer with Check Point Software, created a model of the networks of devices in current use in everything from manufacturing to nuclear power plants. As McKerrall pointed out, this was his second version. The first looked so much like a bomb that he had trouble flying with it. The new version, with its plywood back, is clearly designed to look more like a science project. But it is, as McKerrall says, a very real replica of what is in industry today.
His presentation showed not only how easy it is to infiltrate a network, but also how easily one can issue commands to the myriad devices on an industrial control network. The video that accompanies this article features McKerrall showing just how easily these networks of devices can be manipulated.
But how to gain access? McKerrall kept the audience riveted as he went through a scenario, step by step, showing real data with the names obscured to protect the gullible.
The bottom line, says McKerrall: “you don’t need to dumpster dive or use social engineering. You can get all the information you need from LinkedIn.”
Here’s just one scenario that he put forward.
Step one – create a LinkedIn profile of a fictitious job candidate who wants to apply to the targeted company. Make it realistic and interesting. Who would link with a fictional professional? The answer is “almost everyone.” In fact, as I’ve written for ITBusiness.ca before, even a relatively obvious fake profile will get a link from many people.
Step two – identify an engineer in the company and find out the names of the devices the company is using. How? McKerrall showed us the public profiles of several engineers whose listed qualifications bragged about the very devices the company was using.
Step three – Google the device and get the user manual and the full specifications. A little more searching turns up a list of all the known vulnerabilities – often from the vendors’ own websites.
Step four – send a resume to the human resources department of the targeted company as a PDF that loads malware known as a RAT (Remote Access Trojan) onto their machine. A RAT creates a backdoor that allows the attacker to gain administrative control of the host computer. These often slip right under the radar of malware detection, hidden in PDFs or other seemingly innocuous files.
Step five – send an email to the engineer from the compromised HR email account with something impossible to ignore – a compensation review, for example. That allows access to the engineer’s PC and provides a base for attacking the hardware devices that the engineer has access to.
As if that weren’t enough, McKerrall walked us through three or four additional ways that network infiltration could be done – cheaply and efficiently. His “finale” showed that when the devices were connected to a network, the LinkedIn charade wasn’t even necessary. He showed us a map of Calgary with the vulnerable devices mapped out – all from publicly available data.
Once you reach any of these devices, says McKerrall, “they will happily spit out all the information that you need to allow you to take control of them. They’ll tell you what device they are and what versions of software they are running.”
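To make that concrete, here is a minimal sketch of the kind of “banner grab” that exposes such chatty devices. Everything in it is hypothetical: the “device” is a local stand-in that volunteers a made-up model and firmware string on connection, the way many real industrial devices answer anyone who reaches their management port (real ones typically speak protocols like Modbus/TCP or Telnet rather than this toy exchange).

```python
import socket
import threading

FAKE_BANNER = b"ACME-PLC-9000 firmware 2.1.4\r\n"  # hypothetical device banner

def serve_once(server_sock: socket.socket) -> None:
    """Accept one connection and volunteer the banner, like a chatty device."""
    conn, _ = server_sock.accept()
    conn.sendall(FAKE_BANNER)
    conn.close()

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Connect and read whatever the device announces about itself."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

# Demo against a local stand-in "device" so the sketch is self-contained.
server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

banner = grab_banner("127.0.0.1", port)
print(banner)   # the "device" hands over its model and firmware version
server.close()
```

Note how little the attacker needs: one TCP connection, and the device names its model and firmware version, exactly the inputs for a vulnerability lookup.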
In short, they give you everything you need to download full schematics of the device. Using this information, even a relatively unsophisticated programmer could mount an attack that would leave no trace.
An academic exercise? Well, until hackers took control of a German plant, or until the Stuxnet malware infiltrated the Iranian nuclear facility, or something as “mundane” as Wired’s reported hacking of a car – maybe this was all just a theory. But with Gartner’s prediction of 25 billion connected things by 2020, how many of these “things” could cause real and lasting damage to a company, to a government or to the general public?
So what can we do? Fortunately, McKerrall had some prescriptive remedies, provided we wake up to the reality of the problem. But unlike information security, where you can decide to block access with few consequences, in the device world you can’t simply block access. A false positive in the information security world can be detected and remedied when a live person reports that they have lost access. A device that is rendered inoperative can cause tremendous financial loss or corrupt a process. You have to be 100 per cent right before you shut down a device and, often, the entire process it supports.
McKerrall outlined the four steps that would be necessary.
- First, we need more visibility into our networks so we can watch what devices are doing.
- Second, we need to log all device activity.
- Third, we need to create a baseline that classifies behaviour into three groups: allowed, not allowed and suspicious.
- Fourth, and only when we know for sure, we can identify deviations and block them.
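The core of those four steps can be sketched in a few lines. This is an illustration only, not McKerrall’s implementation: the device name, commands and baseline below are all invented, and a real system would build its baseline from the logged traffic in step two rather than a hand-written table. The point it demonstrates is the caution in step four: known-bad commands are blocked, but anything merely suspicious is logged for a human, because wrongly blocking a device can halt the process it supports.

```python
# Hypothetical baseline: for each device, the commands observed during
# normal operation ("allowed") and commands explicitly forbidden.
BASELINE = {
    "plc-7": {
        "allowed": {"read_register", "report_status"},
        "not_allowed": {"firmware_update", "factory_reset"},
    },
}

def classify(device: str, command: str) -> str:
    """Place a command in one of three groups: allowed, not allowed, suspicious."""
    profile = BASELINE.get(device)
    if profile is None:
        return "suspicious"            # unknown device: flag it, don't block it
    if command in profile["allowed"]:
        return "allowed"
    if command in profile["not_allowed"]:
        return "not allowed"           # known-bad: safe to block
    return "suspicious"                # deviation from the baseline: review first

def handle(device: str, command: str) -> str:
    """Block only what we are sure about; everything doubtful goes to a person."""
    verdict = classify(device, command)
    if verdict == "not allowed":
        return "blocked"
    if verdict == "suspicious":
        return "logged for review"
    return "forwarded"
```

So `handle("plc-7", "read_register")` is forwarded, a firmware update is blocked outright, and a command the baseline has never seen is merely logged, reflecting the “100 per cent right before you shut down a device” rule.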
Should we be afraid of the Internet of Things? No. We should be afraid of proceeding into it with our eyes closed. Admitting and acknowledging the weaknesses we know about can allow us to find ways to architect a much more secure ecosystem. As so many of the presenters pointed out at this conference – we aren’t looking for perfection. We are looking to reduce the risks to a manageable level.
In taking what could seem theoretical and making it practical and real, this one engineer made the challenges very real indeed. If you haven’t already, check out the video that I captured with McKerrall and his amazing homemade device.