Cyber Security Today, Week in Review for February 18, 2022

Welcome to Cyber Security Today. This is the Week in Review edition for the week ending February 18th. From Toronto, I'm Howard Solomon, contributing reporter on cybersecurity for IT World Canada.



In a few minutes I’ll be joined by guest commentator Terry Cutler of Montreal’s Cyology Labs to discuss some of the news from the past seven days. But first a look back at some of what happened:

The FBI issued another in a series of descriptive advisories on individual ransomware groups. This one was on the BlackByte gang. The next day the gang listed the San Francisco 49ers football team as one of its most recent victims. The team has only acknowledged a network security incident.

Meanwhile the ALPHV ransomware gang claimed a Canadian meat packer is its latest victim. We aren't naming the company because the attack hasn't been confirmed. But as of Thursday, when this podcast was recorded, the company's webpage was displaying a message saying the site was experiencing technical difficulties.

The International Committee of the Red Cross says failing to patch a critical vulnerability was to blame for last month’s hack of its servers. The Swiss-based agency has a patching program. “Unfortunately we didn’t apply this patch in time,” it said in a statement. The unpatched application was Zoho’s ManageEngine ADSelfService Plus, for Active Directory password management. Data stolen included names, locations, and contact information of more than 515,000 people from across the world receiving services from the Red Cross and the Red Crescent Movement. The Red Cross believes it was a deliberate attack by a sophisticated group.

Russian-backed hackers have been targeting what are called cleared defence contractors in the United States for at least the past two years. This is according to the FBI, the NSA and the U.S. Cybersecurity and Infrastructure Security Agency. The victim firms have sensitive intelligence and military contracts. So far what the hackers have taken is unclassified but important information. Often the hackers use brute-force attacks to crack passwords, and spear-phishing emails with malicious attachments. If they aren't on alert by now, defence contractors should be.

And researchers at Proofpoint issued a background paper on a hacking group that has been targeting the aviation, aerospace, transportation and defense industries since 2017. Typically victim firms are infected when an employee is tricked into opening an email with a message about an invoice or request for a competitive bid for a product needed by a fictitious company.

(The following transcript has been edited for clarity. To hear the full conversation, play the podcast)

Howard: We're going to talk about a couple of other stories that I came across. Researchers at CrowdStrike issued their annual global threat report this week, and among the findings: it took hackers an average of an hour and 38 minutes to move from an initially compromised device to another one. That basically means defenders have about 90 minutes to detect and stop an intruder on the first computer, server or domain controller they crack. What do you think of this?

Terry Cutler: 90 minutes is a very long time. And here's the issue I see often: most companies don't have proper detection technology in place to find a hacker in there. I'll give you an example: I run into this exact challenge when I get hired to do penetration tests. The first thing we do is run a vulnerability scan to see what's vulnerable, then run some exploitation tools after that to see what we can compromise. And during that whole scan nobody knows we're in there. We're not even being quiet about it. They [defenders] don't have any sensors to say, 'You know, there's an attack happening here.' Once we [the testers] compromise a machine we're able to deploy an agent; we use professional tools to help speed up our work. We become a system-level service, and from there we can do what's called process migration, which allows us to hide our process by moving it from the hacked agent into, let's say, svchost, which is a legitimate Windows process. So now when the [defence] investigators go and look, we're hidden within a legit process.

On top of that, our tools communicate back to us in an encrypted fashion so they can't intercept our transmission. Now we can do a pass-the-hash. This is an attack where you take the [credential] information and pass it off to a server. It could log me in as you without ever knowing the password. We can also do what's called an agent pivot, where we make it look like another machine is attacking the network and not the one I'm in …

Howard: So we've got a couple of problems here. One is cyber security experts say if there's a determined attacker who has the time and the money, they're going to get in. That's what an IT department or the chief information security officer has to assume, and therefore defend against. The other thing is this 90-minute average it can take a hacker to go from the initial compromise to getting into other devices. Is there any standard anybody has set that says you ought to be detecting a suspicious attacker within a particular amount of time?

Terry: Going back to my penetration test: if I gain access to one box I can compromise another box in as little as 20 seconds, if all the stars align. I can understand why the hackers keep this around the hour-and-a-half mark: they're trying to be as quiet as possible so they don't set off any triggers. That's normal. When we get hired [for a test] we're not there to speed through the job; we want to make sure the defenders' alerts are actually working. Going back to your point about a determined hacker getting in if he wants to: it's absolutely true. There is no silver bullet to stop a hacker. The only thing you can do is make it as hard as possible for them to get in and have some monitoring in place to help mitigate this.

If you look at things like the ISO or NIST (National Institute of Standards and Technology) standards, they're all telling you you need to have [network] monitoring in place. You need to have log management in place, and endpoint detection and response [EDR] technology. So if the attacker gains a foothold on the endpoint, the moment he tries to move laterally it's going to trigger some alerts.

But here's a challenge I see in a lot of companies: yes, they're collecting log information, but nobody's monitoring it. They're just collecting event data, and when a problem arises they go back to the logs and say, 'Oh yeah, we got breached seven months ago.'
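The difference between collecting logs and monitoring them can be sketched in a few lines. This is an illustrative example, not any vendor's product; the event format, the five-failure threshold and the two-minute window are all assumptions for the sketch:

```python
from collections import deque
from datetime import datetime, timedelta

def detect_bruteforce(events, threshold=5, window=timedelta(minutes=2)):
    """Flag users with `threshold` or more failed logins inside `window`.

    `events` is a time-ordered iterable of (timestamp, user, outcome)
    tuples -- a stand-in for parsed authentication log entries.
    """
    recent = {}   # user -> deque of recent failure timestamps
    alerts = []
    for ts, user, outcome in events:
        if outcome != "FAIL":
            continue
        q = recent.setdefault(user, deque())
        q.append(ts)
        # Drop failures that have slid outside the window.
        while q and ts - q[0] > window:
            q.popleft()
        if len(q) >= threshold:
            alerts.append((ts, user))
            q.clear()   # avoid re-alerting on the same burst
    return alerts
```

An organization that only stores its events has all the data this function needs; the point Terry makes is that without something like it running continuously, the breach is discovered seven months later instead of two minutes in.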

Howard: So why are many IT departments so slow in detecting attacks and what can be done to improve the rate of detection?

Terry: The biggest issue here, again, is no detection technology in place. Management, especially old-school management, believes that if I have a firewall, if I have encryption, if I've got strong passwords, I'm safe. But that's legacy, traditional technology. Once cybercriminals bypass it they get into your environment. [Defenders] don't know that there's something in there, and the only time they're going to know is when something crashes. For example, when we [in a pen test] exploit a box, the box becomes unresponsive and now the IT department has to come in and maybe reboot it or look into it. It's only at that point they're going to ask why this keeps happening. They just don't have the proper visibility in place.

We've had some clients deploying three different vendor products to help protect their environment: one vendor for Windows desktops, another vendor for the Windows servers, another for the Linux servers. You have to go through three different areas to find the information you're looking for. There's not one centralized dashboard that's going to alert them, so everybody in IT is always piecemealing everything together.

Howard: The report also noted that attackers are increasingly using the tools in Windows once they get into a network to further their attacks. They’re not adding specialized tools which may be detected by the IT defenders. They’re using the Windows tools that are already in the system against the system. And this is called ‘living off the land.’

Terry: This actually happened to a client: cybercriminals got in and launched tools like TrueCrypt and BitLocker to lock up 400 computers. The company even had EDR deployed. How on earth is this possible? How did EDR not stop this? Because these [TrueCrypt and BitLocker] are legitimate tools that had been whitelisted for use within the system. The attackers were asking $10,000 per [encrypted] computer. With 400 computers, that means they were asking $4 million for the keys to unlock the machines.

Howard: So what will block a living-off-the-land attack?

Terry: IT will have to blacklist all tools, then whitelist them as needed. If you know you're going to be deploying BitLocker to encrypt some machines, you have to disable it once the job is done. But if you're dealing with large environments with thousands of computers, it's going to be a real management nightmare.
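The deny-by-default, allow-as-needed approach Terry describes can be sketched as a hash allowlist. In practice this is done with OS application-control policy engines rather than hand-rolled code; the tool names and hashes below are hypothetical placeholders:

```python
import hashlib

# Deny by default: only binaries whose SHA-256 appears here may run.
# The entry below is a placeholder, not a real binary's hash.
ALLOWLIST = {"a" * 64: "backup-agent"}

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_execute(binary: bytes) -> bool:
    """Deny-by-default check: unknown binaries are blocked."""
    return sha256_of(binary) in ALLOWLIST

def allow_temporarily(binary: bytes, name: str) -> None:
    """Whitelist a tool for a planned task (e.g., a BitLocker rollout)."""
    ALLOWLIST[sha256_of(binary)] = name

def revoke(binary: bytes) -> None:
    """Disable the tool again once the job is done, as Terry advises."""
    ALLOWLIST.pop(sha256_of(binary), None)
```

The management nightmare he mentions comes from the last two functions: every planned deployment needs an allow-then-revoke cycle, multiplied across thousands of machines.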

Howard: Was there anything else in this report that caught your eye?

Terry: … What we're seeing more of right now is attackers breaking in through credential reuse or vulnerabilities, versus installing malware. I find it interesting that the people I speak to find EDR way too expensive. Business owners don't understand the value in it. 'Why do I have to spend ten thousand bucks on this stuff?' They don't realize that a breach will cost something like 30 times more. And if they want cyber insurance, EDR could be a requirement.

Howard: Another report I found suggested organizations continue to be lax in revoking computer access when employees leave. There was a survey of a thousand employees in the U.S., the U.K. and Ireland. Eighty-three per cent of them said they were able to continue accessing old email, social media and application accounts after they left their employer. Of those people, 56 per cent said they had used that access to harm their former employer, and 24 per cent admitted they intentionally kept a password after leaving the company. For their part, 74 per cent of employers surveyed said they have been negatively impacted by a former employee breaching their digital security.

Terry: It really comes down to identity and access control. I've been in the boots of an IT help desk in a past life, and things are moving so fast that IT doesn't have time to keep track of everything. If they know an employee is leaving on a certain date, they can just set an expiry date in Active Directory to shut down that account on that date.
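For context on the Active Directory expiry date Terry mentions: AD stores it in the `accountExpires` attribute as a Windows FILETIME, the number of 100-nanosecond intervals since January 1, 1601 (UTC). A minimal sketch of the conversion an offboarding script would need (the LDAP write itself is omitted, and the example date is made up):

```python
from datetime import datetime, timedelta, timezone

# Windows FILETIME epoch: January 1, 1601 (UTC).
FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def to_filetime(when: datetime) -> int:
    """Convert a datetime to the accountExpires integer format:
    100-nanosecond intervals since 1601-01-01 UTC.
    Integer arithmetic avoids float precision loss on large values."""
    delta = when - FILETIME_EPOCH
    return (delta.days * 86_400 + delta.seconds) * 10_000_000 \
        + delta.microseconds * 10

def from_filetime(value: int) -> datetime:
    """Inverse conversion, useful when auditing existing accounts."""
    return FILETIME_EPOCH + timedelta(microseconds=value // 10)

# Example: expire an account at the end of an employee's last day.
last_day = datetime(2022, 2, 28, 23, 59, 59, tzinfo=timezone.utc)
expiry_value = to_filetime(last_day)
```

Once `expiry_value` is written to the account's `accountExpires` attribute, the directory refuses logons after that moment with no further action from IT.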

We sometimes deal with environments that have 16,000 employees. Management often is not informed that an employee is leaving, so when we [as external auditors] come in and run an audit against Active Directory we see 150 employees who have not signed in in over 90 days. And we're told, 'Oh yeah, these people are all gone.' We also see that they're not removing employees from other online services. So, for example, maybe you had an employee who managed your website in WordPress or handled your Facebook marketing. They're not being removed from those accounts. We've even seen a managed service provider that still had TeamViewer [remote access] into a former customer. And when this MSP got ransomed last year, it included this former customer.
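The 90-day audit Terry describes boils down to comparing each account's last sign-in against a cutoff. A minimal sketch over already-exported data (the directory query itself is omitted, and the account names are invented):

```python
from datetime import datetime, timedelta, timezone

def stale_accounts(accounts, now=None, max_idle_days=90):
    """Return account names whose last sign-in is older than `max_idle_days`.

    `accounts` maps an account name to its last-logon datetime, or to
    None if the account has never signed in -- which also counts as stale.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return sorted(
        name for name, last_logon in accounts.items()
        if last_logon is None or last_logon < cutoff
    )
```

Run monthly against a directory export, this one check would have surfaced the 150 departed employees in his example long before an auditor did.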

Howard: So there are two problems: One is the HR department isn’t informing the IT department of who’s leaving, and then this other problem where a service provider fails to decommission a former customer.

The other story that I reported on this week was BlackBerry's annual threat report, which included a warning that small and medium-sized firms are increasingly being targeted by hackers. That's probably no surprise to you in your work.

Terry: That's what's keeping me busy. Here's the thing: cybercriminals know small businesses don't have the time, money or resources to do cyber security well. When I talk to business owners about all the tools that would help prevent attacks, it's mind-blowing what they tell me: 'We're a flower shop. Who's gonna want to hack us?' They're telling me things like, 'Cybersecurity is not interesting.' They see no value in it because they believe they're not a target. Or there's too much technobabble: they don't understand when the IT department tries to talk to them. Management just sees a price tag. They don't want to be educated on the problem or how it could be prevented. They just want action taken, now.

We dealt with four breaches in December alone where the firms all claimed to have been hacked by a sophisticated hacking group that took control of their systems. But after doing a dark web check we found their administrator passwords had leaked.

Another issue is firms outsource their technology to cloud providers and think they're safe, when in fact a lot of the time they've misconfigured their Amazon [data storage] buckets and can be compromised.

Howard: I also want to talk about an organization that got hit because there was just one vulnerability, as I mentioned at the top. The International Committee of the Red Cross failed to patch one critical vulnerability and a sophisticated hacker who was probably watching for an organization that hadn’t closed this vulnerability pounced. The International Red Cross said it installs thousands of patches a year. The implication was ‘it’s not like we’re incompetent on this,’ but all it took was one patch that wasn’t installed.

Terry: When you start mass-deploying this many patches a year, something's going to break somewhere. We used to be able to thoroughly test a patch before deploying it into production, to see what would break. But now stuff is flying at us so fast that we have to mass-deploy these updates and hope for the best.

Howard: I can remember the Equifax hack. One of the problems was that a patch for a server was overlooked. They had an automated vulnerability scanner on the network that was supposed to scan every server and every device and alert when a patch was ready to be installed. But this particular patch was missed. … And IT managers knew that this patch had been issued and that that particular server needed to be fixed. But they assumed the IT staff in the trenches who were actually looking after this server knew about it, so they didn't message them.

Howard Solomon
Currently a freelance writer, I'm the former editor of and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@]
