Welcome to Cyber Security Today. This is the Week in Review for the week ending Friday, October 20th, 2023. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.
In a few minutes Terry Cutler of Cyology Labs will be here for a discussion. But first a quick review of headlines from the past seven days:
In this morning’s podcast I told listeners about white hat hackers from Ukraine who took down servers behind the Trigona ransomware. Another ransomware group apparently gone under. The Record says it has been told by Europol that the Ragnar Locker ransomware operation has been hit by several law enforcement agencies.
Also this week, the U.S. seized 17 website domains that hackers from North Korea used to defraud companies, get around international sanctions and help fund North Korea’s weapons programs. The scheme partly involved North Korean IT workers sent to China and Russia to get hired by U.S. companies. Their salaries were sent back to North Korea and used by the government for weapon development.
An IT employee error is behind the theft of over 100,000 pieces of customer data from Casio, a well-known consumer electronics manufacturer. The company said this week that some of the network security settings in its application development environment were disabled. That allowed a hacker to get into Casio’s ClassPad education web application. It’s a site for those who want to learn math using Casio calculators. The attacker copied customer names, email addresses and product purchasing information. With that data a hacker could create a phishing message that looks like it came from Casio.
Intelligence directors from the Five Eyes co-operative – including Canada and the U.S. — this week warned the private sector that foreign countries are after their intellectual property. The group released a guide called the Five Principles of Secure Innovation to explain how businesses can protect against security threats.
More data allegedly stolen from the DNA testing company 23andMe has appeared on a cybercrime forum. The first leak had 1 million records of people who used the service. The latest leak has 4.1 million records.
A former U.S. Navy IT manager has been sentenced to five years and five months in prison for stealing data of 9,000 people held by a private company. The accused told the company he needed to access the data to perform background checks for the Navy. Instead the data was sold on the dark web for US$160,000 in Bitcoin.
BHI Energy of Georgia, a division of Westinghouse, is notifying over 91,000 people in the U.S. their data was stolen in an incident in June. The company isn’t using the word ransomware. But it is saying data was encrypted.
Finally, the Fauquier County Public Schools of Virginia is notifying almost 14,000 students and employees their personal data may have been stolen in a ransomware attack last month.
(The following is a partial transcript of the discussion. To hear the full conversation play the podcast)
Howard: This week technology advisor Bernard Marr penned a column in Forbes.com looking ahead at the 10 biggest cybersecurity trends for 2024. They include the shortage of cybersecurity talent, an increase in attacks on IoT devices like industrial controllers, cyber warfare and the disruption of elections, and an increase in cybersecurity regulation.
Terry and I thought we should delve into others on the list.
We’ll start with the increasing use of Generative AI by attackers and defenders. It’s no surprise that crooks will leap onto the newest technologies, hoping to get ahead of defenders. What are you seeing and hearing about how Generative AI is being abused by threat actors?
Terry Cutler: We’re going to start seeing more deep fakes. This is where the AI can start impersonating public figures, maybe manipulating voice or video. Kind of like a scam where you get a phone call that sounds like your child’s been kidnapped, and you’re asked to pay a ransom. We’re going to start seeing more of that.
One of the things that concerns me, though, is AI-powered malware. This is a new generation of malware that can potentially adapt and learn from the defense mechanisms you have in place. It can actually change its behavior automatically to avoid detection, making it harder for traditional security tools to catch. That’s why it’s so important that you have proper detection technology in place. I would strongly suggest that business owners sit down with their IT department or their MSP [managed service provider] to really deep dive in to see if they’re really protected. Remember, IT folks are usually generalists. Think of them as your family doctor: Would you ever ask your family doctor to perform laser eye surgery on you? Most people would say no. So it’s really important that cyber experts be paired up with your IT team to help you find out if you have any weak points. For example, I ran into a couple of situations this month where MSPs were providing wrong advice to business owners — like telling the customer they don’t need EDR [endpoint detection and response] technology, and then the customer’s Exchange server gets infected and the entire network gets ransomed.
We’re seeing situations where the MSP is not customizing the security solutions [for customers]. They think one solution fits all …
We’re also going to start seeing [AI-driven] misinformation. And I think there could also be data poisoning at some point, where the [defensive] AI learning systems will be infected with misinformation. I think there’s also going to be automated hacking: AI is going to understand new attack vectors and possibly find ways to get around your system.
Howard: Everybody’s heard of ChatGPT. Security researchers have written of discovering Generative AI systems expressly written for threat actors such as WormGPT, which is a clone of ChatGPT that’s trained on malware data; FraudGPT; DarkBART, which is a version of Google’s conversational AI called Bard; and others. So threat actors are arming up with AI.
Terry: Let’s take the example of FraudGPT. It’s dangerous because it’s designed to facilitate financial fraud. It may create realistic-sounding phishing emails or maybe fake financial documents. I think this is going to be very, very lucrative for cybercriminals because consumers are not up to date with the latest scams. So they definitely need to get more security awareness training. The more you can share knowledge about these scams that are coming out and how to avoid them, the better. That’s why there needs to be a big collaborative approach to this.