Welcome to Cyber Security Today. This is the Week in Review for the week ending Friday, September 15th, 2023. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.
In a few minutes David Shipley of Beauceron Security will be here to comment on recent news. But first a look at some of the headlines from the past seven days:
Microsoft explained how the hack of one of its software developers led to one of the most amazing breaches of email security by a China-based threat actor. David and I will look at that.
Also in the news, as of Thursday afternoon, when this podcast was recorded, several security researchers were concluding the cyber attack on MGM Resorts was ransomware. The malware repository called VX-Underground claims a hacker pretending to be an employee called the company’s help desk and persuaded a person to give them network access. [UPDATE: AlphV ransomware gang takes credit]
Separately, Caesars Entertainment, like MGM a big Las Vegas hotel and casino chain, reportedly paid a ransom to avoid the leak of customer data in a recent hack.
Aerospace and manufacturing giant Honeywell International is notifying over 118,000 current and former employees that some of their personal information was copied in the hack of the company’s MOVEit file server. The data related to a pension plan Honeywell was administering. Information copied included names and Social Security numbers.
Still with MOVEit victims, Aetna Life Insurance is notifying an unspecified number of people that their personal information was copied when a hacker got into the MOVEit server of a data processor called Pension Benefit Information. As I’ve described before, many American organizations use PBI to verify pension subscriber information. A considerable number of PBI customers say they were victims of the hack of that firm’s MOVEit system.
Texas Medical Liability Trust, which operates several medical insurance firms, is notifying almost 60,000 people about a data theft. A hacker got into its IT system last October and accessed names, Social Security numbers, tax identification numbers, state ID or driver’s licence information and financial account information.
Stolen login credentials are still a major factor in successful cyber attacks. IBM says the use of valid credentials was the most common initial access vector, figuring in 36 per cent of the cloud security incidents it examined over a 12-month period that ended in June.
And threat actors are increasingly going after companies that use Macintosh computers, creating new malware to get sensitive data. The latest example was found by researchers at SentinelOne. They call it MetaStealer. It usually gets distributed as an email attachment. In one case a hacker posed as a design client and sent the victim a password-protected infected file that carried MetaStealer.
(The following is a transcript of the first of the four news items discussed. To hear the full conversation play the podcast)
Howard: Let’s start with Microsoft’s explanation of the astonishing use of forged Exchange Online authentication tokens by a China-based threat actor. They used the tokens to get at the email of 25 organizations, including government agencies. Microsoft revealed this attack in July. It was discovered in May. Initially, Microsoft said the attacker was somehow able to get an inactive Microsoft account consumer signing key because of a validation error in the company’s code. Then the attacker used that signing key to forge authentication tokens for accessing Azure Active Directory enterprise accounts. But how did the attacker get the key? The answer, Microsoft said last week, starts with the crash of one of its consumer signing systems in April, 2021. A snapshot, or crash dump, was then made. Crash dumps shouldn’t include sensitive information, like a signing key. But this one did. That crash dump was moved from Microsoft’s isolated production network to its debugging environment on Microsoft’s internet-connected corporate network.
Sometime after this event the threat actor compromised a Microsoft engineer’s account, which had access to the debugging environment. The hacker probably got into the debugging environment, rummaged around and found the signing key. This was a consumer key. However, because of a flaw in Microsoft API libraries this consumer key could be used to sign a security token for an enterprise mail system.
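To make the API library flaw concrete, here is a minimal sketch in Python. It is an illustration of the class of bug described, not Microsoft’s actual code: the key names, secrets and token format are all invented, and real Microsoft tokens are RSA-signed JWTs rather than HMACs. The buggy validator checks only that a signature verifies against a known key; the fixed one also checks which key population (consumer or enterprise) the signing key belongs to.

```python
# Simplified illustration (NOT Microsoft's actual validation code) of a
# validator that verifies a signature but never checks the signing key's scope.
import base64
import hashlib
import hmac
import json

# Hypothetical key store: each key is tagged with the scope it may sign for.
KEYS = {
    "consumer-key-1": {"secret": b"consumer-secret", "scope": "consumer"},
    "enterprise-key-1": {"secret": b"enterprise-secret", "scope": "enterprise"},
}

def sign(key_id: str, payload: dict) -> str:
    """Build a toy token: key id, base64 body, HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    mac = hmac.new(KEYS[key_id]["secret"], body.encode(), hashlib.sha256).hexdigest()
    return f"{key_id}.{body}.{mac}"

def validate_buggy(token: str) -> bool:
    # BUG: accepts a valid signature from ANY known key, regardless of scope.
    key_id, body, mac = token.split(".")
    expected = hmac.new(KEYS[key_id]["secret"], body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

def validate_fixed(token: str, required_scope: str) -> bool:
    # FIX: additionally require the signing key to belong to the expected scope.
    key_id, body, mac = token.split(".")
    if KEYS[key_id]["scope"] != required_scope:
        return False
    expected = hmac.new(KEYS[key_id]["secret"], body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

# A token signed with a stolen *consumer* key:
forged = sign("consumer-key-1", {"sub": "victim@example.gov"})
print(validate_buggy(forged))                # True: enterprise mail accepts it
print(validate_fixed(forged, "enterprise"))  # False: rejected once scope is checked
```

The point of the sketch is that the signature itself was cryptographically valid; the missing check was the boundary between the consumer and enterprise key sets.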
First, before we get into the nitty-gritty of this, one implication of the analysis is the threat actor might have been rummaging around some organization’s email for two years.
David Shipley: That’s an eyebrow-raiser, and a point that the security firm Wiz makes in their excellent analysis of Microsoft’s response. It means that this threat actor could have been going around Microsoft customers for some time before it got its hand caught in the U.S. government cookie jar — and that’s unsettling. This is a great example of the limits of security investigations in the face of a long-term cyber operation. Nation states have the advantage of time and if they play things right, log limits are their friends. The old, ‘Perhaps it didn’t happen,’ is an asset for attackers and a liability for defenders.
Howard: What did you think when you read Microsoft’s explanation about what happened?
David: First, how each problem on its own, while significant, is understandable, but how phenomenally lucky — or skilled — the Chinese hacking team was to chain all of these things together at the right time. Which ties back into the multi-year timeframe [issue]. Let’s call this one Shipley’s Law: ‘Given enough time, a persistent threat actor’s odds of causing havoc increase to near certainty.’ In this case, I’d say if an attacker is in your network for two-plus years, even if you’re the best in the world you’re screwed.
The second major thought relates to a point made by the Risky Business folks in this week’s newsletter about what this [incident] says about security culture at Microsoft. It’s an interesting perspective. Often when we hear security culture talked about today it’s focused on things like cyber security awareness education programs. But this is security awareness in practice — how organizations implement the knowledge that should be available to them. More specifically, how they prioritize security in their business. A quote that stood out for me was one by Vaughan Shanks, CEO of Cydarm and a computer scientist who has worked with both the Australian Signals Directorate, the ASD, and the U.S. NSA. He described these lapses as “flabbergasting.”
The [Microsoft] security culture question is a valid one, Risky Business noted. Combining two separate consumer and enterprise [key authentication] systems into one, while convenient and maybe having some business advantages, without reviewing the access boundaries around that control is a pretty significant screw-up. So is not enforcing key expirations. Ouch. Add the harder, but still important, mistake of failing to make sure sensitive data like keys isn’t included in crash dumps, and all of it raises interesting questions about security culture.
Howard: Certainly one of the questions is why didn’t Microsoft’s credential scanning methods catch that a key was in the debugging environment?
David: I think this does go back to priorities: how much effort were they putting into protecting the crown jewels? It also leads to a rather scathing quote that I thought was worth noting from the Risky Business podcast newsletter: “These are decision-making failures that simply wouldn’t happen in an organization that actually cares about security. This breach didn’t happen because of a series of amazing coincidences. It happened because Microsoft’s security culture is not up to scratch.” That’s a pretty damn harsh assessment. But the key question is whether it’s a fair one. Ultimately it’s going to be up to Microsoft customers, who have the final say on this particular chapter. Do they accept that Microsoft has learned painful lessons from this and improved? Or do they grow increasingly concerned about the underlying non-technical cultural issues, and does that change anything about their buying decisions?
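For listeners wondering what credential scanning of a crash dump even looks like, here is a minimal sketch. The patterns are common, well-known secret signatures of my choosing (PEM private-key headers, the AWS access-key-ID shape, a base64url JWT-looking blob); they are assumptions for illustration, not the signatures Microsoft’s scanner actually uses, and production scanners add many more rules plus entropy analysis.

```python
# Minimal sketch of a secret scanner that should flag key material inside a
# crash dump before it leaves an isolated network. Patterns are illustrative.
import re

SECRET_PATTERNS = [
    re.compile(rb"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # PEM key header
    re.compile(rb"AKIA[0-9A-Z]{16}"),        # AWS access key ID shape
    re.compile(rb"eyJ[A-Za-z0-9_-]{20,}"),   # base64url blob starting like a JWT
]

def scan_dump(data: bytes) -> list:
    """Return the patterns (as strings) that matched in a crash-dump blob."""
    hits = []
    for pat in SECRET_PATTERNS:
        if pat.search(data):
            hits.append(pat.pattern.decode(errors="replace"))
    return hits

dump = b"...heap...-----BEGIN RSA PRIVATE KEY-----MIIEow...-----END..."
print(scan_dump(dump))  # flags the embedded private-key header
```

The takeaway from the incident is less about any one regex and more about where the scanner runs: Microsoft said its scanning missed this key, so the dump crossed from the isolated production network to the internet-connected corporate one with the secret still inside.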
Howard: Who thinks about crash dumps as a security risk?
David: At this scale, with these kinds of firms playing at the most elite game of the security and hacking world, you have to now. There’s an old quote that your threat model is not my threat model. Do I think that worrying about crash dumps as a security risk is something that every firm has to consider? Probably not. But if you are Microsoft, Apple, Google, ya better be.
Howard: The other thing is Microsoft hasn’t explained how its employee’s computer was compromised.
David: No, it hasn’t. And that raises some interesting questions, like was this employee’s device compromised for significantly longer and used in other incidents? We’ll probably never know, because the logs probably don’t exist. It also could be super awkward depending on whether they [the attackers] were relying on [Microsoft’s] security tools. But it does raise interesting questions that an application developer with this high level of access got pwned, and that all of the things that should have prevented persistence didn’t work.
Howard: What are the lessons that application developers or IT managers can learn from this incident?
David: I don’t think this is so much about the application developers. I do think it’s about executives, product managers, CSOs and IT managers. I think this should prompt a hard conversation about the importance of regulating these too-big-to-fail cloud and infrastructure providers. Arguably they are now as important to the economy as the big banks, if not more so. I used to think that the nature of these businesses, and the fact that their reputation was so critical to selling their cloud services, products, etc., would drive a positive security culture that would root out these kinds of issues. I’m not nearly so sure about that thesis after this incident. Maybe we do need some better oversight of these kinds of deep issues.