Welcome to Cyber Security Today. This is the Week In Review edition for December 4th, 2020.
With me this week looking at one interesting event is Terry Cutler, CEO of Montreal-based Cyology Labs. To hear the podcast, click on the arrow below.
First a look at the top news in the past seven days:
Word that nations are about to get the first shipments of COVID-19 vaccines should be good news. However, Interpol and IBM this week issued warnings that criminals and nation-states are likely to go after the vaccine supply chain: to steal the serum, to infiltrate companies’ computer systems and steal data, or to encrypt data and hold it for ransom. In September IBM learned of a spear-phishing campaign against companies that make specialized equipment to keep vaccines cold.
Clumsy software developers and employees continue to give executives grey hairs. In Brazil, the personal information of more than 243 million people held by the country’s Ministry of Health could have been accessed for several months because web developers left a database password inside the source code of the ministry’s website. This was discovered by a Brazilian newspaper. The same newspaper revealed last week that an employee at a São Paulo hospital uploaded to the GitHub developer platform an unprotected spreadsheet that included usernames, passwords and access keys to sensitive government systems.
There’s more: Reporters at the TechCrunch news site recently found unprotected data holding thousands of patient records and lab reports for American psychiatrists and therapists on a server belonging to a customer of NTreatment, a San Francisco-based provider of a cloud-based medical practice management software suite. Not only was the database not password-protected, but the data also wasn’t encrypted.
And The Register reports that a Cayman Islands investment fund left its entire secondary data backups open to anyone after failing to properly configure its cloud storage service.
The lesson from all these incidents is some people still aren’t getting the security message.
If your IT department finds evidence that unapproved crypto mining software is running on your servers, it may not have been planted by a criminal. Microsoft reported this week that an unnamed country has been installing crypto mining apps as part of a cyber espionage campaign. The thinking is that the miners may go unnoticed, allowing the country to rake in some cash. Or IT staff may be so distracted cleaning up the mining apps that they don’t notice spying software quietly moving through the computer network.
The importance of publicly reporting security vulnerabilities will be the topic of today’s discussion.
Finally, the online education company K12 has acknowledged paying a ransom to regain access to its data. The attackers managed to scramble back-office systems that held some student data and other information. The ransom was at least partly covered by cyber insurance.
What I want to talk about this week is something I didn’t report on because it’s ongoing: a hearing before the U.S. Supreme Court on the interpretation of a piece of legislation called the Computer Fraud and Abuse Act, which makes it an offence to access a computer without authorization or to exceed authorized access. In this case an American police officer was paid by a woman to use his access to a police database to look up information for her. He’s appealing his conviction. The question before the court is how widely or narrowly “without authorization” should be interpreted.
What makes this appeal controversial in the cybersecurity industry is that a wide interpretation might make it unlawful for researchers, or reporters, to find and report holes in computer systems. In fact, one company has filed a brief with the court arguing researchers shouldn’t be allowed to poke around without the prior approval of a software or hardware company.
There’s an argument that public reporting of vulnerabilities encourages cyber attacks. Others argue that public reporting pushes companies to keep security tight or risk being embarrassed. Whatever the country, some law is needed to settle the question.
As I said earlier, with me today is Terry Cutler, CEO of Cyology Labs, which offers managed cybersecurity and penetration testing services. Terry knows a bit about snooping around the Internet for holes because he’s a certified ethical hacker. Hi, Terry.
First let me start by asking what is an ethical hacker?
Terry: “An ethical hacker is essentially a professional who has similar training to the bad guys. What happens is we get hired by companies to legally try to hack them, help them find the holes before the bad guys do and then provide a nice report and say, ‘Here’s what we found, and fix this up.’ And the biggest difference between the ethical hacker and an unethical hacker is that the customer is going to receive that report. The bad guys won’t give that to them.”
Q: There are some researchers who hack to build their reputation and are eager to publicly reveal vulnerabilities, and others who tell companies of the problems and give them time to fix them before going public. Who’s the good guy?
Terry: “They’re both good in their own little ways. So obviously the best way is responsible disclosure, saying that we found a flaw in your system. I’ll give you some time to disclose it because I’m sure if I found it, somebody else found it too. So please get it fixed as soon as you can. We give you, let’s say, 90 days, but sometimes we find a significant flaw that requires a fundamental change to the vendor’s operating system, and that could take more than 90 days to fix. So it’s important that they disclose it properly and work with them to find a balance between the two.”
Q: I can understand an organization being ticked off if it isn’t warned of a vulnerability and given time to close it. But some software companies say ‘This isn’t a big vulnerability, we’ll close it later.’ So the researcher discloses. Do you think that’s right?
Terry: “I do because the company was advised and signed off … When [a] hacker reveals what was found, then maybe other researchers or unethical hackers or cybercriminals will take advantage of it and make the flaw even worse than what it was … Going back to your original question about people trying to build up a reputation, if they’re using this for criminal intent or to make a name for themselves, maybe they won’t get hired in the future because they weren’t authorized to test that person’s environment. And even when you’re testing a person’s environment, bad things can happen during a penetration test. You know, the software could crash, which could have other impacts, especially if you’re in healthcare or voting and the system goes down. And maybe someone’s on life support that requires the attention of another device that links into it. You don’t know what you’re going to take down. So that’s why it’s important to work hand in hand with the company to disclose what you’re doing.”
To hear the full podcast click on the Play arrow at the top.