
Cyber Security Today, Week in Review for Friday, May 6, 2022

Welcome to Cyber Security Today. From Toronto, this is the Week in Review edition for the period ending Friday, May 6th, 2022. I’m Howard Solomon, contributing reporter on cybersecurity for


In a few minutes I’ll be joined by David Shipley, CEO of New Brunswick’s Beauceron Security, to talk about some of the recent news. But first a look back at some of what happened in the last seven days:

Yesterday was World Password Day, a day loathed by some IT leaders who long to be rid of risky passwords used by staff and customers. David will have some thoughts about that.

We’ll also look at the obligations of organizations to notify regulators or cybersecurity agencies about data breaches.

Cybersecurity researchers said this week they are seeing more use of wiper malware to totally erase IT systems. David and I will delve into that.

And we’ll talk about a report that many companies aren’t using the security tools that come with software-as-a-service applications they subscribe to.

Elsewhere, Canadian federal, provincial and territorial privacy officers said Ottawa should either pass a law or issue guidelines to police limiting their use of facial recognition technology in criminal investigations.

A threat group called Stormous did not make good on their claim that they would dump stolen data from two alleged victims: Toy manufacturer Mattel and a medical diagnostics and healthcare technology company. Instead, the group’s underground website became inaccessible on April 29th. At this time, it is not known why their site is down.

The FBI updated its recent report on business email compromise scams. Over the past five and a half years the FBI received reports of over 241,000 incidents totaling more than $43 billion in losses. In this type of email scam, crooks pretend to be partners or customers of an organization and ask that payments be redirected from the usual bank account to one controlled by the hackers.

Hackers used the compromised email accounts of 139 people in the United Kingdom’s healthcare system to spread malware over the past couple of months. According to researchers at INKY, the majority of the messages were fake notifications that someone had sent the victim a document or a fax. The goal was to steal the Microsoft login credentials victims would have to use to see the document. How the email accounts were compromised isn’t known, but spreading malware through real accounts is one way attackers can make a malicious email seem legitimate.

Separately, researchers at Mandiant reported a Russian-based threat actor compromised email addresses to send infected messages to a number of embassies around the world.

(The following transcript has been edited for clarity. To hear the full conversation play the podcast)

Howard: David Shipley is joining us now. Let’s start with World Password Day. It was yesterday, but that’s the way the calendar worked out. This is a day to encourage individuals to take a critical look at their passwords, because poor and weak passwords lead to data breaches. But it’s also a day IT leaders should look at their password policies. David, is one problem creating a strong corporate password policy and getting staff — and consumers if the company is a B2C firm — to follow it?

David Shipley: I think the biggest challenge is creating an environment where people understand the why, as well as what they need to do to be compliant with the password side of things. One of the things we’ve seen after surveying tens of thousands of users at small, mid-sized and large enterprise customers is that the sheer volume of passwords they’re trying to remember these days causes them to create little shortcuts. That means they’re changing just a couple of characters [when changing passwords]. Which, from an algorithmic perspective, looks like a unique password. But if you end up having a breach or two, criminals can actually learn what your pattern looks like. People want to be secure. We know that from the research we’re doing, but we need better, clearer password policies that set people up for success, not put them on the path to failure that the old approach to passwords really set us on.

I’d like to see World Password Day evolve into World Password Manager Day. Maybe that’s the next iterative step — or maybe World Multifactor Authentication Day, to help offset some of the inherent risks. One of the challenges is making sure that [IT managers] check good open-source data like the Pwned Passwords dataset against their internal passwords, to make sure people aren’t reusing or creating passwords that are already known to be bad and breached. That’s one step companies can take right off the bat. There are great services like Troy Hunt’s ‘’ that make this dataset available as hashes. We need to step up our game in terms of explaining to people what a secure password is, how to create one, and giving them better advice about what passwords might already be compromised and how to be safe. A good, strong password is like putting your seatbelt on in the car. That’s great — but we also want to make sure you have other safety measures, and it’s important you still drive safely.
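David mentions that the breached-password dataset is published as hashes. As a rough sketch of how such a check typically works (the k-anonymity "range" lookup pattern is how the public Pwned Passwords service operates; the canned response and counts below are illustrative, not live data):

```python
import hashlib

def pwned_count(password: str, range_lookup) -> int:
    """Check a password against a breached-password hash dataset using
    k-anonymity: only the first 5 hex characters of the SHA-1 hash are
    sent to the service; matching suffixes come back with breach counts."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # range_lookup(prefix) returns the response body for that prefix:
    # one "SUFFIX:COUNT" pair per line.
    for line in range_lookup(prefix).splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# Offline demo with a canned response; a live check would fetch the
# range for the prefix from the service instead.
SAMPLE = (
    "1E4C9B93F3F0682250B6CF8331B7EE68FD8:9545824\n"
    "0018A45C4D1DEF81644B54AB7F969B88D65:10"
)
print(pwned_count("password", lambda p: SAMPLE))  # a well-known breached password
```

The point of the prefix split is that the full password, and even its full hash, never leaves the organization, which is what makes this safe to run against internal credentials.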

Howard: I’m an IT team manager and I set my password policy — you’ve got to have something that’s 16 characters long. From a technology standpoint, how do I prevent an employee from taking an old password and just adding one number to the end of it? Is there something in Active Directory or in other technologies that will scan and prevent the user from doing that?

David: It can be incredibly difficult to actually prevent that, because — depending on how you hash or encrypt the passwords — changing one or two characters creates a unique hash. What you don’t want to do is have some kind of manual process where you’re giving IT admins access to see the raw passwords your end users are setting, because that’s super dangerous. And it can be hard to catch these computationally, so you have to rely on users to do the right thing. One thing that can be helpful, again, is checking [reported] breach password sets to make sure at least it’s not a known breached password they’re building on. Generally my advice is to educate people and give them a chance to set longer passwords that don’t have to get reset as often. This is one thing we implemented at some places where I worked: if you do just the bare-minimum 16-character password, you’re going to have to rotate it in accordance with whatever the password policy is. But if you go the extra mile and create a longer, more complex password, you might get to keep it for double the normal time. Or you might get to keep it until we know there’s a good reason to reset it.
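The reason this is hard to catch computationally is the avalanche effect: a one-character change produces a completely unrelated hash, so "Summer2022!" and "Summer2023!" look nothing alike once hashed, and similarity can’t be detected by comparing stored hashes. A quick illustration (the example passwords are made up; SHA-256 stands in here only to show the effect — real password storage uses salted, slow hashes like bcrypt or Argon2):

```python
import hashlib

def digest(pw: str) -> str:
    # SHA-256 used purely to demonstrate the avalanche effect.
    return hashlib.sha256(pw.encode("utf-8")).hexdigest()

old, new = digest("Summer2022!"), digest("Summer2023!")
matching = sum(a == b for a, b in zip(old, new))
print(old)
print(new)
print(f"{matching}/64 hex positions match")  # about 1 in 16 by chance alone
```

This is why the practical defenses David names are policy-side (breach-list checks, education, longer reset intervals) rather than hash-side.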

Howard: The guidelines of the U.S. National Institute of Standards and Technology — otherwise known as NIST — are often used by organizations to set their password policies. A couple of years ago it changed its recommendations and said companies should abandon complex passwords in favor of having employees use passphrases, which are easier to remember.

David: This was a fascinating change. In 2017 Bill Burr, the author of the original NIST policy about including random characters and making passwords complex, admitted his faux pas with the benefit of hindsight. Some interesting context about that policy: when it was being developed in 2002 they had to use research from the 1980s to shape it. So they did their best. They took their best guess, and they didn’t understand the unintended consequences — the aforementioned two-character shift. As well, what’s changed over the last 20 years is the realization that making a password longer makes it more difficult to crack than just character complexity. Think about how fast it is to brute-force passwords: a passphrase like ‘thebluebandcoffeecupwithpens’ is a better password than ‘69A3421’. It’s easier because I can look at my desk and a couple of different objects and use that as a mnemonic to remember it. The problem with the 2017 update to NIST [encouraging passphrases] is that it still didn’t recognize the sheer number of online services people now have. That’s why password managers make a lot of sense. In the surveys we do with employers, one of the scariest answers we get is when we ask people how they create their passwords and how they keep them secure. A vast majority of employees still work in enterprises that don’t provide these tools, and they say they just remember their passwords — which is a red flag for exactly the kind of small character change at the end of a previous password that sets us up for brute-force attacks and breaches.
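The length-beats-complexity point can be made concrete with a back-of-the-envelope keyspace calculation (the character-set sizes below are the standard assumptions, not figures from the episode):

```python
# An 8-character password drawn from all 95 printable ASCII characters,
# versus the 28-letter lowercase passphrase David cites.
complex_8 = 95 ** 8        # short but "complex"
passphrase_28 = 26 ** 28   # long but lowercase-only

print(f"8-char complex keyspace:  {complex_8:.2e}")
print(f"28-char passphrase space: {passphrase_28:.2e}")
print(f"the passphrase space is ~{passphrase_28 / complex_8:.0e} times larger")
```

Even with a drastically smaller alphabet, the longer string wins by many orders of magnitude, which is why the exponent (length) matters far more to a brute-force attacker than the base (character set).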

Howard: And there are lots of companies that offer enterprise-grade password managers. So from my point of view there’s really no reason why, if you’re an IT manager, you shouldn’t be offering password managers to your staff.

David: One of the really cool things is that a number of these enterprise password managers can also create segmented instances for people’s personal accounts. Which goes back to your original point about how we make sure they’re not reusing personal passwords. If you can give employees a tool that lets them create strong, complex passwords for all their stuff, you’re greatly reducing their propensity to reuse passwords — and you’re making your employees less likely to have a personal data breach. So it’s a win-win for organizations offering password managers. The truly paranoid ask what happens if the password manager provider is breached. Well, the probability of that being the root cause of what happens to your organization is significantly less than the probability of what we’ve already seen happen time and time again with people reusing passwords. We just hit the one-year mark from the Colonial Pipeline [cyber attack] disaster in the United States, which at its core started with a reused password. So on the balance of risks, an enterprise password manager is such a fantastic way of dramatically reducing risk.

Howard: What about moving to a passwordless policy, making staff use fingerprint readers, facial recognition, or software or hardware tokens for logins so you do away with passwords completely?

David: Well, it’s interesting you mention that, because passwordless itself is a bit of a lie. It really should be ‘fewer passwords,’ because at the end of the day, when you dig deep, there is usually a password at the end of a [login] chain needed to reset or unlock things. One of the interesting things I’ve seen is that the adoption of Windows Hello and other [passwordless] approaches suggests we don’t have a password. But I need a six-digit PIN [in case they fail]. Well, how hard is that to crack for someone who might have physical access to a device? I’m not downplaying the use of fingerprint readers, biometrics or other things to make it faster and easier for people. But there are still passwords hidden somewhere in the chain. We’re not getting rid of them easily.

If I were looking at enterprises, I’d ask first: how many of our IT services can we get on a single sign-on solution? Tying services into one account is great for a number of reasons. Second, can that account be secured with multifactor authentication that maybe uses biometrics or a hardware token? I think in some ways the tech industry really promotes the passwordless vision. The dream is great, but it is nowhere near the actual reality of where we are today or how long it’s going to take. We talk about change management and process adoption. The technology to actually get to fewer passwords has been around for 50-plus years. It’s probably going to take us another 50 years to get to a less-password world. The only other caution I have about the race to passwordless, particularly with biometrics, is that I can reset my password. It’s really, really difficult to reset my face if that biometric is somehow compromised [by a threat actor] and reused in different attacks. So there’s always a caution. There’s always fine print to the technological dream.

Howard: If you’re an employer and you want to go passwordless you can order it for hundreds or thousands of employees. But if you’re a retailer and you’ve got tens of thousands, hundreds of thousands of customers how do you convince them to ditch passwords? Isn’t there a risk that they’re going to move to a competitor who doesn’t make them use multifactor authentication?

David: I think this is what held back voluntary multifactor authentication rollouts by big banks and others: if we make this too inconvenient, customers go elsewhere. A couple of things come into play. The single sign-on story can also work for retailers. Why get customers to create another account? Maybe they can use their Google or Facebook account [to log in]. The federal government here in Canada set up a fantastic single sign-on partnership with the banks: you can use your secure bank account login to access your government services. So why not make it easier for customers, with less friction, by reusing a secure account with a trusted provider? It’s a win-win for everybody. And those accounts might be the ones that take on the multifactor authentication challenge. Second, make multifactor authentication really easy to stand up and use, and go from there. I’ve even seen folks like [movie chain] Cineplex roll out multifactor authentication. Eventually things like gift card fraud, gift account compromise and other losses get to the point where the potential loss from customer friction is lower than the current actual [fraud] losses. I think that’s how companies need to convince business leaders that this [going passwordless] makes the most sense: ‘We’re losing more money by not being secure and having customer dissatisfaction.’ …

My last point: if we look at really successful multifactor authentication deployments in something like a customer setting, think about universities. What a lot of them did in Canada was make it voluntary at first. So your keeners got involved, then you had champions for it, and then the university made it mandatory or took a risk-based approach. There are many ways to tackle this, but don’t let business inertia or fear stop you from rolling out MFA to your customers. They will begrudgingly thank you for it. Some of them will be genuinely grateful.

Howard: Issue two: reporting data incidents to regulators and cybersecurity agencies. This is one of the ways defenders can keep on top of the latest threats, because there’s a lot of sharing of threat information. But around the world there are different — or nonexistent — reporting rules. In Canada, federally regulated financial institutions must report a technology or cybersecurity incident to the Office of the Superintendent of Financial Institutions within 24 hours, or sooner if possible. That’s separate from notifying victims. The reason I want to talk about this week is that starting next month in India, all government agencies and companies — including data centers and internet service providers — have to report cyber incidents within six hours of detection to India’s Computer Emergency Response Team. They’re also required to maintain IT system logs for a rolling period of 180 days and be prepared to submit them to the CERT if requested. Meanwhile, as of May 1st, U.S. banks are required to notify regulators of computer security incidents within 36 hours of detection. That’s down from the previous 72 hours. David, how fast should authorities be notified of cyber incidents, and who should be doing the notifying? Should every company have to notify, or just those in critical industries?

David: As soon as reasonably possible, even though you’re also trying to contain or deal with an incident … So how do we balance those things? There are some important nuances we have to get nailed down. Not only do we have this disparity of regulatory reporting timeframes and when the clocks start — which for global enterprises is becoming one hell of a compliance nightmare — we also have nuances about what actually qualifies as an incident requiring notification, and that’s different in each jurisdiction. And finally, to your last point: if every single company is reporting incidents, and incidents are defined as broadly as possible, there is no way government agencies are ready to keep up with that. We’re just going to drown the entire system. I think we absolutely need to focus on critical industries, which means not only energy, telecommunications and banking, but also health care and other areas that are most impactful to society. Get that threat intelligence sharing so there’s value there, and then gradually look at expanding it out on the basis of economic impact or supply chain. Going from zero to 60 miles an hour this year on reporting is just going to cause whiplash. How are micro and small businesses going to come close to resourcing this capacity, given the global IT worker shortage we’ve talked about? The threat environment certainly requires us to get better at sharing information, but if we don’t get this harmonized and sorted out it is going to be a whole bunch of noise accomplishing nothing.

Howard: And you want reporting for two reasons. Imagine one bank gets attacked. All of the other banks want to know about it because it could be an industry-wide attack. The other thing is, IT teams in other industries want to know what tactics were used in the attack. I may be a shoe manufacturer, but I want to watch out for those tactics in case they’re used against me.

David: Absolutely. We want early warning that there’s a group potentially working within our industry sector, geography, et cetera, so we can raise the shields. But we also want to know what we can learn from the mistake. There’s still way too much victim blaming and victim shaming in cybersecurity incidents. For anything that happens to one particular company in an industry, I guarantee you there are other companies using the same software or having similarly flawed processes, or whatever the root cause was. And we should never waste the pain that comes from an incident if we can use it to improve the security posture of many others. That’s collective defense. That’s how we advance the state of things, and good mandatory reporting laws for specific industries are how we get really good at it.

Howard: What should be reported? In India, under the new guidelines, they want scanning to be reported. In the U.S. only an incident that actually caused some harm has to be reported, so probing and scanning would not qualify.

David: It’s ridiculous to try to report everything that’s only being probed or scanned … The U.S. approach, where only an incident that caused harm is reported, is much more useful. Along with, possibly, the tactics, techniques and procedures used by the attackers.

Howard: In Canada, is there enough incident reporting to an authority beyond privacy regulators, who have to be notified under certain circumstances? The federal government has the Canadian Centre for Cyber Security, which distributes a lot of vendor alerts and notices about the availability of patches, as well as a lot of cybersecurity advice. But it isn’t a regulator, so no company is obliged to report incidents to it. Should companies have to report to the Centre, or should the federal government create another cyber agency that federally regulated companies would have to report to?

David: I think the Cyber Centre is the right agency, but I think the voluntary [reporting] model has failed. I’ve talked to publicly traded companies that are highly regulated, and the first thing that happens when they want to disclose [an incident] to the federal government is that legal or the compliance and regulatory folks get involved and say, ‘We can’t share this information. It might have a material impact on our business. That could negatively affect us, and that’s not in the interest of our shareholders.’ Whereas if we had mandatory reporting requirements that say you have to report this or you will face consequences, the risk logic changes completely. The other barrier to sharing information in a voluntary model is that people are afraid of leaks to the press — and Colonial Pipeline is an example of this. Critical details leaked from a government agency to the media without the company’s consent. So when we set up mandatory reporting it has to be, ‘The government will by God protect it … and we’re going to shield you from additional liability that might stem from disclosure.’ That’s the only way to change the psychology behind decision-making about sharing information today.

Howard: The Cyber Centre would probably say, ‘We’d be glad to do it as long as you double our budget and double our staff.’

David: This goes back to our earlier point about what kinds of incidents we want to qualify for mandatory reporting, and by what industries — and we do need to resource it. But I think the Centre is the right place because it’s plugged well into CSE [the Communications Security Establishment, which is responsible for protecting federal IT networks]. We [in the private sector] can use that actionable intelligence as a way to plug into other federal government foreign policy responses — say, to an international cyber incident … I wouldn’t create a new agency from scratch. Let’s use what we’ve built, but make it better and easier for companies to share this information. And let’s harmonize with the United States: make sure we pick the same kind of incident definition the Americans have, and roughly the same reporting period, so that our companies that do business in the U.S. aren’t trying to comply with two different timelines and two different sets of obligations.

Howard: Issue three: I want to turn to reports on wiper malware. Researchers at Fortinet issued what I’d call a background paper on wiperware, and Microsoft issued one as well. Since Russia’s invasion of Ukraine we’ve seen more deployment of wiperware. Fortunately it’s been very targeted, and we haven’t seen the kind of thing that happened with the 2017 NotPetya worm, which was also aimed at Ukraine but spread around the world. David, what do you make of these reports?

David: I love the level of detail and transparency we’re getting from these reports in painting the picture. With the Microsoft report in particular we got a sense of a much higher level of activity than we had been made aware of since the invasion started in February. It gives us more of a sense of how successful some of the defensive activities were, both in preparing for and preventing these attacks. Lessons learned from 2014 onward were applied. They [Microsoft] have also been effective at sharing [cyber] information, co-ordinating responses and shutting down attacks. In particular I think about Microsoft’s response center, because they were on the very bleeding edge of detecting the first wiperware attacks as the physical invasion began. What was also interesting from the reports was how much prep work was discovered. We can see fingerprints going all the way back to March 2021 laying the groundwork for the cyber attacks that were attempted. I think that gives us a clue as to when and how we might feel some pressure back from Russia on Western countries and companies seen as unfriendly to it as a result of this conflict …

One of the interesting things from some of these reports is how Russia layered cyber and kinetic [physical] attacks. For example, a TV broadcasting center was targeted. First came cyber attempts to disrupt it as much as possible, with varying degrees of success. That then escalated to physical [attacks].

Howard: What are the best defenses against wiperware?

David: The basics keep coming back: making sure your devices are patched, good user authentication, detection capabilities, network segmentation, practiced response plans, and backups. The thing we’re missing is thinking about what happens when wiperware does more destructive things than just erase data. What if it actually bricks devices? Do you have the processes to rebuild them and get them redeployed? … Think about the attacks against Saudi Aramco almost a decade ago. The Saudis had to charter a plane to Southeast Asia to buy what remaining hard drive stock was available, because massive flooding had hit manufacturing plants and there was a huge shortage of hard drives. Make sure your incident response plan has a supply chain component to it.

Howard: Finally, last month there was a report on the security of software-as-a-service offerings. These include everything from Gmail to Salesforce. SaaS offerings, as they are known for short, are touted as being great for security because the company that provides the service does the patching. Individual IT departments don’t have to worry about that. However, sometimes business units or employees sign up for these services without the knowledge or oversight of IT departments. In a survey by the Cloud Security Alliance, 43 per cent of respondents said their organization suffered one or more cybersecurity incidents because of a misconfiguration by users of these software-as-a-service applications. Thirty-five per cent of respondents thought the fault is that too many departments have access to SaaS security settings. An almost equal number complained about a lack of visibility into changes to SaaS security settings. What’s going on here?

David: This is the foundational problem of enterprise IT, and it doesn’t change whether you’re on-prem or in the cloud: it’s an identity and access management problem. There’s a shared responsibility model between a SaaS provider and a customer. That’s how my company works: it’s up to our clients to set up access and permissions and to manage them. What we need to see is better development of standards, particularly for SaaS providers and core identity and access management providers, to come up with a common framework for identity and access management. What FIDO (the Fast Identity Online Alliance) is trying to do with passwordless is create a standard. We need the same for access management: a standard that makes it easy for privileges to be propagated from central identity management, audited, and given the transparency that enterprise IT and risk and compliance teams are looking for, in a way that’s easy for SaaS providers to implement.

We had a problem with one of our service providers. We wanted to send them secure information using our SharePoint, but their corporate ID blocked access. So we ended up having to implement another cloud provider to do secure data sharing. Thankfully we have change management processes and everything else, so we can track that. But in many other instances people would just wing it — open up their Gmail or Google Drive or Dropbox accounts and just try to do their jobs.

Howard Solomon
Currently a freelance writer, I'm the former editor of and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@]
