
Cyber Security Today, Week in Review for Friday, Oct. 21, 2022


Welcome to Cyber Security Today. From Toronto, this is the Week in Review edition for the week ending Friday, October 21st, 2022. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com.

 

In a few minutes David Shipley of Beauceron Security will join me to discuss recent cybersecurity news. But first a look back at some of the headlines from the past seven days:

A mistake by a city of Hamilton employee allowed 450 recipients of an email to see the names and email addresses of everyone the email went to. David and I will talk about why people still don’t know how to use the Blind Carbon Copy feature of email systems.

We’ll also talk about the risks of using real customer data when testing applications after an Australian company admitted its test customer data file was stolen.

And we’ll look at this week’s data from Statistics Canada on the number of cyber incidents that hit businesses here. Eighteen per cent of the 8,000-plus respondents said their firm was impacted by a cyber attack last year.

In other news, members of Canada’s Parliament still had limited access to some internet-based services on Thursday. That was a week after an unnamed cyber threat was detected on the IT system that serves the House of Commons and the Senate. All MPs, Senators and others on the network were forced to reset their passwords. The office of the Speaker of the House of Commons said there is no indication that the accounts of members of Parliament were compromised. An investigation continues.

Researchers at Trustwave warned that password-protected ZIP files are increasingly being used as a tactic for spreading malware. The files come as an email attachment that’s supposed to be an invoice. Password-protected files are a way threat actors try to get around spam defences. The payload could be a backdoor, a cryptomining application or ransomware.

Those behind what some researchers call the Ursnif malware have changed their attack tool. It used to primarily be malware for stealing bank account passwords. But researchers at Mandiant say the latest variant drops a generic backdoor that allows entry into victim’s IT systems. This suggests the developers now aim to have their malware used for distributing ransomware. The report includes indicators of compromise to help security teams detect this malware.

Finally, researchers at SafeBreach discovered a new PowerShell backdoor that is being distributed through an emailed job application. The recipients might be getting the emails by responding to a phony job offer on LinkedIn. What’s worrisome is this backdoor has — until now — been undetected. One of the commands the backdoor can execute is to access Active Directory to count how many users are in the victim organization and how many instances of Windows remote desktop there are. The researchers believe 100 victims have been targeted in this scheme.

(The following transcript has been edited for clarity. To hear the full discussion play the podcast)

Howard: This was in some ways a week like any other in cybersecurity: news about data breaches, data thefts and blunders by employees. A bit disheartening during Cyber Security Awareness Month.

We’ll start with a report about an employee of the city of Hamilton, Ontario doing what many company staffers do: sending out an email to 450 people on a list. Except this staffer didn’t use the Blind Carbon Copy option in the email. Instead, the list of names and email addresses was pasted into the ‘To’ field for recipients. So all 450 people the message went to were able to see the names and email addresses of the others it went to. Had the staffer used the BCC option, recipients would only have seen their own email address, because Blind Carbon Copy hides the recipient list from everyone on it. David, what did you think when you heard of this?

David: First of all, BCC is the demon of email because it’s either used for passive-aggressive purposes or, as in this case, it bites people because they meant to BCC but put the addresses in the CC or the ‘To’ field. This happens to every single organization. If it hasn’t happened to yours yet, it’s only a matter of time. It’s one of the oldest privacy breach mistakes in the digital book, and it’s a combination of human error and bad process. It’s vital that organizations not treat their email platform as the communications hammer for every single problem or use case when communicating with clients — or in this case with citizens. It’s important to use tools that have security and privacy controls baked in, like a customer relationship management platform or a mass mailing platform like Mailchimp. That ensures each individual gets their own dedicated message, with no chance of seeing other people’s information. And this is a regular reminder that Microsoft Exchange’s message ‘Recall’ function doesn’t work when you’re sending messages outside of your organization. There’s no ‘Undo’ for this kind of mistake. And the ‘Recall’ function works poorly even when sending something within an organization.
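
As a rough illustration of the dedicated-message approach David describes, here is a minimal Python sketch. The mail server, sender address and recipient list are hypothetical placeholders, not the city’s actual tooling, and a real mass-mailing platform layers consent tracking, bounce handling and unsubscribe links on top of this:

```python
# Send each recipient their own copy instead of one blast with every
# address exposed in the 'To' field. Host and addresses are hypothetical.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.example.org"        # placeholder mail server
SENDER = "elections@example.org"      # placeholder sender address
recipients = ["alice@example.com", "bob@example.com"]  # the mailing list

with smtplib.SMTP(SMTP_HOST) as smtp:
    for addr in recipients:
        msg = EmailMessage()
        msg["From"] = SENDER
        msg["To"] = addr              # one recipient per message, so no
                                      # one can see anyone else's address
        msg["Subject"] = "Instructions for voting by mail"
        msg.set_content("Here is how to vote by mail ...")
        smtp.send_message(msg)
```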

Howard: It could have been worse. The message in the Hamilton email, as far as I know, didn’t contain any other personal information. It was about instructions for voting by mail for the upcoming municipal election. But someone with criminal intent could have used that list of email addresses to send spam, or sold the list to a crook.

David: This is an important point that’s often overlooked when evaluating privacy breaches: something in Canada called the real risk of significant harm, or RROSH. It’s a harm test that determines whether you absolutely have to report a privacy breach to a federal or provincial privacy commissioner. For example, if you get hit by ransomware and the attackers extract a bunch of personal data from your environment, there is a real risk of significant harm to identifiable people. If an employee accidentally sees information they shouldn’t have, maybe there’s a lower risk and you don’t hit that real risk of significant harm threshold. In this case I think the risk overall is low. However, it’s still a good idea to engage the privacy commissioner if you have an event like this. Why? Because the privacy commissioner is not just there to rap people on the knuckles. They often provide really good advice that helps find gaps in an organization’s people, process or technology, and lessons you can learn to avoid future privacy headaches.

Howard: In this case, the municipality had to report to the provincial privacy commissioner under Ontario’s municipal privacy law.

David: Again, it’s not a bad idea. It’s the best free advice you’re going to get from professionals. And here’s a crazy thought: do something proactive instead of waiting until after you have an incident. Talk about the processes your organization has and what risks might be out there. If you’re messaging large groups on a regular basis, make sure you know how to do so safely.

Howard: Because this is a common problem, is it a matter of every time a staff member in any organization is asked to send out an email blast to a large number of people, a senior person should remind them, ‘This is the proper way to do it. Make sure that you put that list into the BCC field’? And as I ask the question I’m thinking at the same time, what if it was a senior person who did this in Hamilton?

David: It’s vital for senior leaders — particularly executives and directors in private sector firms — to be aware when anyone is being tasked to send mass communications that might be commercial in nature. There are other laws that have even more teeth than privacy laws. In Canada there’s the Canadian Anti-Spam Legislation, known as CASL. It demands that people in communication roles are trained to do it properly if the email or text messages are commercial. Executives and directors who aren’t paying attention to the people delivering these kinds of messages can be held individually liable for breaches of the legislation. Lots of Canadian businesses mass email, and they may not be paying attention to either the privacy or the anti-spam aspects.

Howard: News item number two: a database of real customer data was stolen from an application testing server belonging to an Australian wine distributor. The company said the data included customers’ names, addresses, dates of birth and phone numbers. This is great data for crooks sending out phishing messages or creating phony ID. The incident happened last month and was only reported this week. It happened during a test of a system upgrade. The company was defensive: it said that given the scale of the upgrade, and in line with industry practice, a customer database was used to critically test the platform. This raises an old question: should developers use real data or phony data when testing their applications?

David: Hell no, absolutely not. Can I say that any stronger? You can create artificial data. Yes, it does take time and effort to build a script that’s useful, but you can do scale testing using generated data. This is a common mistake startups make early in their life, and like the BCC story earlier, eventually somebody makes a mistake in configuration and out the door goes customer data. Using real data in a test will always bite you in the backside.

Howard: In this case, the company said the test data wasn’t connected to the company’s website, as if that made a difference: the crooks somehow got access to the test server and got the data anyway.

David: Most big data breaches that we see come from insecure cloud environments. It’s not about the website per se. It could be how an Amazon S3 bucket is configured, or other aspects of the cloud environment … But if you’re running a test environment with the same data as production and you’re not watching it, you’re going to get bit. This is where things go wrong, like developers spinning up instances and grabbing production data. It [cybersecurity] is people and process and technology.

Howard: In doing the research for this item I came across a blog by a New Zealand application developer who maintained that test data has to reflect real data that’s going to be used in production. He recalled working on an application that was developed using supplied test data that was meaningless, so much so that when real data was used to test the application when it was close to going into production the application didn’t work. So he argues that the lesson is test data has to accurately reflect the real data that’s going to be used in production. But if that’s so, how does the organization protect itself?

David: Here’s an example using my company: we created artificial data for testing. We know all the fields involved in our product, a security awareness platform. We created a series of scripts that can populate up to 100,000 people — fake first names, last names, fake job titles — and it looks like real data. But there’s absolutely no risk to us or our clients. I can’t think of a use case where you can’t create suitable artificial data.
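
A minimal sketch of that kind of script might look like the following. The fields, name pools and file name are illustrative assumptions, not Beauceron’s actual generator:

```python
# Generate realistic-looking but entirely fake people for load testing.
# Name and title pools are illustrative; no real customer data involved.
import csv
import random

FIRST = ["Avery", "Jordan", "Morgan", "Riley", "Sam", "Taylor"]
LAST = ["Chen", "Garcia", "Nguyen", "Roy", "Singh", "Tremblay"]
TITLES = ["Analyst", "Account Manager", "Developer", "HR Coordinator"]

def fake_people(n):
    for i in range(n):
        first, last = random.choice(FIRST), random.choice(LAST)
        yield {
            "first_name": first,
            "last_name": last,
            "job_title": random.choice(TITLES),
            # .invalid is a reserved TLD, so these addresses can never
            # belong to a real person
            "email": f"{first.lower()}.{last.lower()}.{i}@test.invalid",
        }

# Populate a 100,000-person test file, as in David's example.
with open("test_customers.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["first_name", "last_name", "job_title", "email"]
    )
    writer.writeheader()
    writer.writerows(fake_people(100_000))
```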

Howard: News item three: I’m going to go back in time just a little bit, but I want your opinion. At the SecTor cybersecurity conference in Toronto earlier this month one speaker I heard said the following, and here I’m paraphrasing: if your organization suffers a significant breach because a user was phished, and the steps the threat actor took to get that data were pretty simple, and the security or IT team didn’t see what was happening, it’s not the user’s fault. Your security architecture needs to be looked at. What do you think?

David: It raises a really important point. When we talked in the ’70s and ’80s about the positioning of gas tanks in vehicles, and a particular manufacturer’s truck exploded when it took a side impact, it wasn’t the driver’s fault for getting into an accident. The reason it became a catastrophic accident was the engineering design of the gas tank. Similarly, in the recent Uber breach a person got phished, their credentials were captured, and the hackers got into Uber’s environment, where they found scripts with passwords. Really successful organizations don’t just tell people that phishing is a thing to look for, they build and sustain a security culture. They look at how it can be applied throughout their organization, including in security architecture choices. It means giving people the time, the money and the resources to ensure that the security architecture is secure. Think of the Swiss cheese model: security awareness teaching, getting users to avoid phishing, can be your first layer of defence. Your second layer is your architecture and your technology controls.

Howard: This is another argument for ‘Don’t blame the employee.’ On the other hand, the speaker’s argument had a certain number of ‘ifs,’ like, ‘if the attack is simple and the attacker doesn’t have to get past complex defences by using complex techniques.’ But doesn’t that describe a lot of successful attacks?

David: It does. It takes a combination of all the right things falling into place. My analogy for being on the defensive end of cybersecurity is being the goalie for an NHL team when you’re the only player allowed on the ice — and the attackers get to fire as many shots on goal as they like. Some of them might be easy and some of them might be really amazing shots, but you as the goalie only have to get it wrong once and they score. If your organization gets its phishing test click rate below five per cent, you’re lowering those shots on goal.

Howard: The final news story I want to look at is the survey Statistics Canada put out this week on cyber incidents suffered by Canadian businesses in 2021. This wasn’t the usual vendor survey of a couple of hundred organizations. There were more than 8,000 respondents. Here are some of the numbers: last year 18 per cent of responding Canadian businesses said they were impacted by cybersecurity incidents. That compared to 21 per cent of Canadian businesses in both 2019 and 2017. What stood out to you in this report?

David: We saw about a 14 per cent decline in organizations reporting they’d been impacted by a cybersecurity event. [The drop from 21 per cent in 2019 to 18 per cent in 2021 is three percentage points, but relative to the original 21 per cent it’s about a 14 per cent decline.] That was positive. But the report also says that in 2019 the private sector was spending just under $7 billion on cybersecurity. That amount had jumped by about 40 per cent, to just shy of $10 billion, in 2021. That’s a hell of an increase in spending on cybersecurity. We dramatically increased spending, but the cost to businesses [of incidents] was $400 million in 2019, and that skyrocketed 50 per cent to $600 million in 2021. So let’s recap: we spent 40 per cent more and we had 50 per cent larger losses. And interestingly enough, the number of businesses reporting that they spent something on cybersecurity only went up by one percentage point: 62 per cent in 2021 compared to 61 per cent in 2019. They spent more on cybersecurity and still lost more, which tells me that individual incidents continue to get more expensive. That matches some industry research.
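
For anyone following along with the math, here is a quick back-of-the-envelope check of those relative changes, using the rounded figures quoted above (not a new data source):

```python
# Verify the relative changes David cites, using the rounded StatCan
# figures quoted above. Values are per cent of businesses and $billions.
impacted_2019, impacted_2021 = 21.0, 18.0   # per cent impacted by incidents
spend_2019, spend_2021 = 7.0, 10.0          # cybersecurity spending, $B
cost_2019, cost_2021 = 0.4, 0.6             # losses from incidents, $B

# A 3-percentage-point drop is about a 14 per cent relative decline.
print((impacted_2019 - impacted_2021) / impacted_2019 * 100)  # ~14.3

# Spending rose roughly 40 per cent; losses rose 50 per cent.
print((spend_2021 - spend_2019) / spend_2019 * 100)  # ~42.9
print((cost_2021 - cost_2019) / cost_2019 * 100)     # 50.0
```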

Howard: What struck me were the numbers around ransomware. Eleven per cent of the 18 per cent of Canadian businesses who said they were impacted by a cybersecurity event were hit by ransomware. That’s less than two per cent of all responding businesses. The other thing I found quite notable was that of those hit by ransomware, 82 per cent said they did not pay a ransom.

David: I don’t believe that number. The reason why is it’s so different from all of the private sector studies. In fact CIRA, the Canadian Internet Registration Authority, just published a report with a smaller sample size — 500 respondents — but they’re saying 70 per cent of those who got hit with ransomware paid. Other industry studies put this at 50 to 70 per cent. We’ve seen that people can be really skittish about telling the government that they paid, even to the point where, as you reported a week ago, the RCMP tried to give money recovered from the Netwalker ransomware attacks back to Canadian victims and they wouldn’t take it. I don’t buy that 82 per cent of firms didn’t pay the ransom.

Howard: On the other hand, the difference between an industry study and a Statistics Canada study is that it’s an offence not to be honest with Statistics Canada.

David: But if you paid a ransom to a group that was on a sanctions list, you’re not going to say [publicly] we paid.

Howard: The other noteworthy thing in this survey was that only 10 per cent of businesses impacted by a cybersecurity incident said they reported it to police — and that was down two percentage points from 2019. Police are eager to hear from corporate victims of cybercrime because it helps them understand the extent of the problem, and they can get together internationally and take down some of the infrastructure of these criminal groups. A panel of Canadian and U.S. police officials that I covered at a Toronto cybersecurity conference earlier this month made that point. The police said they’re waiting for your phone calls.

David: That decline is very significant. That would be about a 20 per cent relative drop [in 2021 compared to 2019]. Some of it is that cyber insurers increasingly dictate what you can and can’t communicate to police if you want to maintain your coverage. That’s bad. I have significant concerns about that. Second, as we talked about before, even when the cops have your money [recovered from paying a ransom] and try to give it back, firms still don’t want it. There is deep fear that if this gets out it’s going to cause a reputational hit to the organization — plus they didn’t admit it [the ransomware attack] in the first place.

Howard: Finally, I want to note that this survey left out federal, provincial and local governments, and school boards. It was just a survey of businesses. I wonder how much that would have changed the numbers.

David: I think it would have a big impact on how many paid ransoms. The CIRA report makes a point of dividing businesses from the public sector in some of their responses. Its sample size of 500 isn’t the same as the StatCan report’s, but there’s some really interesting data in there as well.
