Welcome to Cyber Security Today. This is the Week in Review edition for the week ending Friday, December 16th, 2022. From Toronto, I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com.
In a few minutes David Shipley of Beauceron Security will join me to discuss several events from the past seven days. But first a brief recap of some of the headlines:
The contact database of the FBI’s partnership program with businesses was stolen and is up for sale on a cybercrime forum. Security reporter Brian Krebs reports that the database of the InfraGard program has 80,000 names of people in cyber and physical security working for critical infrastructure firms like utilities and manufacturers. David and I will talk about this incident.
We’ll also look at the importance of IT department policies on session cookies after security researchers found a problem with them in Atlassian products.
David will have some thoughts on the cyber war between Russia and Ukraine. And we’ll discuss whether IT workers who don’t patch government systems fast enough should go to jail.
A small Ottawa-area web hosting company acknowledged it was hit by the ransomware gang calling itself Cuba. The company, 2NetworkIT, says it was able to restore almost all customer data from backups within 48 hours.
California is investigating a cybersecurity incident after the Lockbit ransomware group claimed it stole data from the state finance department. According to TechCrunch, the group says it stole 76 GB of data.
The Play group also claims to have hit a film school in Canada. At the time this podcast was recorded that claim hadn’t been confirmed.
Microsoft had to shut the accounts of several of its hardware developer partners after threat actors used their access to issue digitally signed malicious hardware drivers to victims. One of those threat actors is alleged to be the Cuba ransomware gang.
Twelve months after alerting the public that Ontario’s COVID-19 vaccine management program had been compromised, the province began notifying residents whose personal data was copied. It took this long to compile the list of 360,000 victims.
Software supply chain security is one of the most critical risks to any organization, says Google. So it released a new research report detailing how developers should make open-source software more secure.
Speaking of supply chain security, the cloud backup of an IT asset management company called Teqtivity was hacked, resulting in the leaking on the dark web of corporate data from one of its biggest customers, Uber. The data included names and email addresses of thousands of Uber employees.
(The following transcript has been edited for clarity)
Howard: The first news item we’re going to look at is the hack of the FBI contact database of people in critical infrastructure. Apparently, a hacker impersonated a real CEO to get into the database, and then was able to copy it. This is pretty embarrassing.
David Shipley: This is totally an, ‘Oh, noooo,’ situation. And I gotta say, just a reminder to all of us: we are all one bad identity and authentication process away from a bad day like the FBI’s. That being said, this one is particularly bad. This is the InfraGard program, which normally is a great idea: bring together all of the critical infrastructure folks — bank, energy, telco — to share threat information. This is what industry is begging for: a safe, secure, vetted forum so we can communicate and collaborate as well as the bad guys can. If the FBI is listening today, I love it. Keep it up. Let’s work on the processes.
What happened is a hacker going by the name “USDoD” was on one of the forums bragging about applying to get into the InfraGard program using stolen personally identifiable information, including the person’s phone number. He also supplied a separate email address the impersonated person didn’t control, so when he got approved that email was the option given for second-factor authentication. When the kid got in, he was able to use an API that was apparently available to any member, and siphoned out all the information on members.
In terms of sensitivity, the information is mostly publicly available from LinkedIn searches and other sources. How dangerous is that? Well, it probably saved the Iranian and Chinese state-sponsored hacking teams some time updating their Maltego [software used for open-source intelligence and forensics] and their maps of relationships and all that stuff. So the severity is relative.
What did freak me out was InfraGard had an internal messaging forum so member groups could communicate with each other. This hack could have been used to deliver malware to C-suite executives. So let’s do better next time.
This kid put the information up for sale for US$50,000. To him I say: I hope you’re ready to spend the rest of your life looking over your shoulder, because if there’s any group that can be patient it’s the FBI. These guys are going to hold a grudge. So when your new [criminal] forum gets popped — and it will get popped — and when those admins get squeezed — and they will get squeezed — you’re going to get a knock on your door someday. This was totally not worth it.
Howard: So this is a failure of human processes by the FBI, for not thoroughly vetting applicants?
David: This is social engineering 101: Impersonate people and become a trusted identity. This is a failure of the identity and authentication process. They should have made sure that there was actual verification: Pick up the phone and call the CEO of the bank. ‘Did you apply to be part of InfraGard? Yes? Validate the following information. Yes, we’re going to send this to your bank’s, or your organization’s, controlled email address. Did you get it? Yes, you’re going to log in. Great.’ I don’t want to say they shouldn’t have had a program. But the value and trust that people put in this kind of system is the vetting. That’s where this kind of fell apart. Second, we could get into the API security on this, but a CEO-level user probably shouldn’t have the level of access needed to query the API and extract everybody’s data.
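To make that last point concrete, here’s a minimal sketch of what a less permissive member-directory API could look like. Everything here — the function names, the per-query cap, the hourly rate limit — is an illustrative assumption of mine, not a description of InfraGard’s actual API. The idea is simply that a routine member account should never be able to pull the whole directory in one go:

```python
import time

MAX_RESULTS_PER_QUERY = 10   # no single call returns the whole directory
MAX_QUERIES_PER_HOUR = 20    # throttle scraping spread across many calls

_query_log = {}  # caller id -> list of recent query timestamps

def search_members(caller_id, directory, name_filter):
    """Return at most one page of matches; refuse unfiltered bulk pulls."""
    if not name_filter or len(name_filter) < 3:
        raise PermissionError("A specific search term is required; no bulk listing.")
    now = time.time()
    recent = [t for t in _query_log.get(caller_id, []) if now - t < 3600]
    if len(recent) >= MAX_QUERIES_PER_HOUR:
        raise PermissionError("Query rate limit exceeded.")
    _query_log[caller_id] = recent + [now]
    matches = [m for m in directory if name_filter.lower() in m["name"].lower()]
    return matches[:MAX_RESULTS_PER_QUERY]
```

With limits like these, even a fraudulently approved account can’t siphon 80,000 records through one convenient endpoint; the scraping becomes slow and noisy enough to flag.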
Howard: Where else in government and the private sector do we see this sort of user database being built, where basically you apply and you get approved? You’re applying for a credit card, or for an account with a retailer so you can purchase things online. I suppose it’s pretty broad.
David: This was a little social media platform. All you newly-minted Mastodon admins out there should take a page from this incident. You’ve got a responsibility to your community. Again, this kind of shenanigan can happen in multiple contexts to multiple people.
Howard: The second item I thought we should look at is a warning that went out to IT administrators whose organizations use products from Atlassian. These include the Trello project management platform, the Confluence collaboration platform and the Jira IT service management platform. There’s a problem with the session cookies generated by these applications. A session cookie lets a logged-in user keep working without re-entering their credentials, which is convenient — but anyone who captures that cookie can impersonate the session, so it has to be protected. Session cookies are supposed to be temporary: when you exit the application or close your browser the cookie should be destroyed, because it carries sensitive information. Researchers at a company in India called CloudSEK discovered, after investigating a hack at their own company, that Atlassian session cookies can last for as long as 30 days if the user doesn’t log out, close their browser or shut down their computer. That’s how a hacker got into CloudSEK’s IT environment: a staffer’s session cookie was captured in a log, letting the attacker bypass the staffer’s password.
After being informed, Atlassian said it revoked the problematic session cookies. But this problem of session cookies lasting a long time isn’t new.
David: No. This enters the grand debate of usability and security. How often do you force your admins to log out of stuff? How long are they allowed to keep their windows open? This is not necessarily an inherent flaw in the architectural design of session cookies and authentication mechanisms. It’s just a reality of choices that have to get made. I asked a few of my security team members for their thoughts on session cookies, and they raised an interesting point: If someone’s able to steal your cookies on a computer, you’ve got problems: You’ve got somebody in your network. You could have lots of stuff getting intercepted. This is part of a broader headache that’s happening within your organization. What platform providers can do is give choices to organizations: Should we kick your admins out after a day? After an hour? Depending on the sensitivity of your information system, how much do you want to annoy your grumpy IT admins? And the reason why IT admins are grumpy is they’ve got tons of tools to manage. This is why I keep pointing out to all the people who think the latest hardware-based token ID will be perfectly secure: there’s always going to be a flaw. There’s always a risk in identity and access management — which ties really well into the first part of our conversation [on the FBI]: Proving identity, and securing identity, is hard. We make compromises to make systems usable. We can have perfect security — no one can log into the system. Finding the balance [between security and usability] is a lot harder than people think. There are really good tips, things organizations can do. The OWASP folks provide some great advice about making sure that it’s not easy to guess what an active session ID could be, so somebody can’t just jump into someone else’s session. There are lots of things you can do to raise the bar.
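The choices David describes can be sketched in a few lines of server-side code. This is a minimal illustration, not anyone’s production design, and the timeout values (8-hour absolute lifetime, 15-minute idle timeout) are my own illustrative picks — they’re not Atlassian’s settings. It combines the OWASP-style unguessable session ID with the idle and absolute limits a platform provider could expose as configuration:

```python
import secrets
import time

ABSOLUTE_LIFETIME = 8 * 3600  # everyone is kicked out after 8 hours, active or not
IDLE_TIMEOUT = 15 * 60        # and sooner if the session goes quiet

_sessions = {}  # session id -> {"user", "created", "last_seen"}

def create_session(user):
    # Unguessable ID: 32 bytes from a cryptographically secure RNG,
    # so an attacker can't enumerate or predict active sessions.
    sid = secrets.token_urlsafe(32)
    now = time.time()
    _sessions[sid] = {"user": user, "created": now, "last_seen": now}
    return sid

def validate_session(sid, now=None):
    """Return the user if the session is still valid, else None (and revoke it)."""
    now = time.time() if now is None else now
    s = _sessions.get(sid)
    if s is None:
        return None
    if now - s["created"] > ABSOLUTE_LIFETIME or now - s["last_seen"] > IDLE_TIMEOUT:
        del _sessions[sid]  # server-side revocation, the kind of fix Atlassian applied
        return None
    s["last_seen"] = now
    return s["user"]
```

Because the session state lives on the server, a stolen cookie is only useful until the next check fails — which is exactly the knob organizations can turn when they decide how grumpy to make their admins.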
Howard: Session hijacking is listed as an attack vector by the Open Web Application Security Project (OWASP). Hackers can get session tokens through man-in-the-middle attacks, cross-site scripting attacks, session sniffing and other tactics.
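Several of those vectors can be blunted with standard cookie attributes. Here’s a sketch that builds a Set-Cookie header by hand with Python’s standard library; the 15-minute lifetime is an illustrative choice of mine, and real web frameworks expose these same flags through their own session settings:

```python
from http.cookies import SimpleCookie

def session_cookie_header(session_id):
    c = SimpleCookie()
    c["session"] = session_id
    c["session"]["httponly"] = True      # JavaScript can't read it: blunts theft via XSS
    c["session"]["secure"] = True        # sent over HTTPS only: blunts sniffing in transit
    c["session"]["samesite"] = "Strict"  # not sent on cross-site requests
    c["session"]["max-age"] = 900        # 15-minute lifetime (illustrative value)
    return c.output(header="Set-Cookie:")
```

None of these flags helps once an attacker is already reading your logs, as in the CloudSEK incident — which is why they complement, rather than replace, short server-side session lifetimes.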
David: There are lots of ways to capture credentials. This is just a gentle reminder that in 2023 social engineering will still be a thing. This also gets back to [cybersecurity] fundamentals: Teaching people to be wary of someone you don’t recognize sending you a link or attachment.
Howard: There are people who want to keep their computers on all day long — people who work from home — so when they wake up in the morning they don’t have to spend time booting the computer and the router. They open the machine and away they go. But that raises the question of whether they’ll have a session cookie problem — unless, of course, the night before they go to bed they log out of everything.
David: Even if you use your laptop at work you can put it to sleep, go home, open it back up and your session’s still live, because we don’t want to irritate people with having to log in again. Let’s be honest, the hidden double-edged sword of single sign-on (SSO) is that it’s actually a hell of a lot harder to log out of stuff. We’ve had these debates with people: ‘I logged out.’ They didn’t. Sometimes when they try to log out, because the system is tied to SSO, they aren’t actually logged out of every single thing the SSO session covers. Again, it comes down to that convenience-security equation. I think this is going to be one of those situations that persists for a long time.
Howard: The third item is a story in the Washington political newsletter The Hill, which talks about how the U.S. and Europe have been helping to shore up cyber defences against Russian online attacks in a number of countries, including Ukraine, Estonia, Lithuania, Montenegro and North Macedonia. Some of that work, admittedly, was done before the Russian invasion of Ukraine began in February. What does this say about cyberwar and the need to have partners?
David: It’s really interesting. The U.S. has this cool concept called defend forward, and I absolutely love it. This is under the U.S. Department of Defense, which just got another $20 billion — a boatload of money — from Congress to deploy teams to other countries to fight the adversary there, versus fighting them at home [in the U.S.]. This is smart: It’s not your critical infrastructure or government agency getting burned down. Your team gets in [to small European countries]. You’re helping clean up. You’re learning all the tactics, techniques and procedures [of adversaries] so you can better protect yourself. You’re helping your allies and you’re making the cost of launching attacks higher on Russia. We need a version of defend forward for Canada, and it needs to live outside of CSE [the Communications Security Establishment, which secures federal IT networks]. I love the folks at CSE; I had some great conversations last week with some of the leadership there. They do a great job defending the government of Canada, and they’re doing an amazing job engaging the Canadian private sector. But they don’t have the right mandate for defend forward from an offensive cyber point of view. That mandate needs to belong to the Canadian Forces. We literally need to call the U.S. and copy their playbook.
Maybe we can’t get $20 billion, but we [Canada] lost $27 billion in CERB [the Canada Emergency Response Benefit for COVID-19 relief], so maybe we can find $2.7 million, hire 10 people and send them to Latvia to defend forward, or pitch in to a NATO team that’s doing this.
Howard: Canada has made some modest contributions to cyber defence in Europe. It said earlier in the war that it offered cyber support to Ukraine. In 2014 Canada contributed $1 million to the NATO Co-operative Cyber Defence Centre of Excellence, based in Estonia, to purchase new hardware for the centre’s defence exercises. Canada actually joined that centre in 2019. Should Canada be doing more in Europe to help smaller countries with cyber warfare? Can Canada do more?
David: Should we be doing more? Absolutely. It’s in our interest. This is literally why we put our soldiers in Eastern European countries — we want to make sure that everyone knows it’s not cool to invade them. That’s defending forward physically, so it would make sense to defend forward virtually in cyber. I would rather be helping Estonia or Latvia clean up from an attack than watching another Canadian federal government or provincial government or grocery provider or healthcare system suffer a cyber attack. Let’s keep them busy over in Europe and help protect us by being a little proactive on this. Can we do it? We have brilliant people. CSE is just full of astounding talent. But we’re not scaling. We’ve heard talk about a cyber capability within the Canadian Forces, but it’s not been resourced or stood up. To be honest, the military as a whole right now is pushed beyond its breaking point, probably second only to healthcare workers. We’ve got to get serious as a country and say this is a priority. Here’s a 60-, 90-, 120-day plan to find and recruit experts. Take a page from Latvia and recruit from the private sector to create a cyber reserve. Think a little bit differently, build a team and get it rolling. Get them in some uniforms so that they have the legal and jurisdictional cover to go defend forward. It’s not like Canada doesn’t have the money.
Howard: Wait a minute: We don’t have the money. We’re $27 billion in the hole for the Canada Emergency Response Benefit, which was financial support for people. The auditor general says some of that money perhaps shouldn’t have gone to certain recipients.
David: … My point is we lost that money. It would be nice if we could actually invest some money in defending our allies and keeping some of these cyber shenanigans from our shores.
Howard: The last thing I want to quickly look at is a story from Slate that Albanian prosecutors have requested five government IT officials be placed under house arrest for failing to update the antivirus software on government computers. They could be imprisoned under the country’s abuse-of-post law for up to seven years. What sparked this was a July cyber attack on the country that took down many government online services. Iran has been blamed for that attack. The U.S. says the attackers got into the Albanian systems through an unpatched Microsoft SharePoint server, for which patches had been available since 2019. So, should the government of Canada or the government of the United States imprison government IT workers for not patching computers fast enough?
David: If you want to create an even greater IT employment shortage in North America, pass that law. Guess what? No one’s going to be an IT admin anymore. Super bad idea. However, it is interesting to ask who should face penalties when organizations fail like this. Canada’s anti-spam legislation [known as CASL] says executives and directors of a company can be held liable for failures to do the right thing. This was a really interesting concept because it actually breaks down some of the fundamental [legal] defences of being an incorporated body. It sent a signal: We hold you responsible if you fail to lead.
My second point is leadership accountability, and I’m saying this as the CEO of a company. Leadership responsibility is a fine thing. We should probably have more of it, and in fact some of the legislation being looked at in Bill C-26 [a proposed bill putting cybersecurity obligations on companies in certain critical sectors] in Canada has elements of that leadership accountability. Not necessarily resulting in jail time: that’s a little extreme. But potentially some financial penalties. It puts the right incentives behind leadership decision-making so that people do the right thing.
However, if you’re going to do that you need to make sure there’s a robust due diligence defense so we don’t have a mass exodus of people not wanting to be CEOs. They have to be able to prove they did the best they could. This loops back to the FBI story — there’s no such thing as perfect security.
I’m going to take a second here: It’s the end of the year, the end of the quarter. All security vendors are bombarding everybody everywhere with ‘Buy our thing and all your CEO nightmares go away, and we’ve got 30, 40, 50 per cent discounts …’ There’s no perfect security. There’s no perfect accountability with this stuff. But how do we send the right signals [to organizations]? How do we have checks and balances between penalties and fairness? Throw IT admins in jail because they didn’t patch? What if they weren’t given the time to patch? What if they were specifically told they couldn’t patch that system because the organization couldn’t afford downtime? Accountability in the right places, with the right defences, is probably a conversation we need to have, but fairness is important in the equation.