Cyber Security Today, Week in Review for week ending Friday, Feb. 9, 2024

Welcome to Cyber Security Today. This is the Week in Review for the week ending Friday, Feb. 9th, 2024. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.


In a few minutes Terry Cutler of Montreal’s Cyology Labs will be here to discuss recent news. That includes how a deepfake video conference call fooled an employee of a Hong Kong company into wiring US$25 million to crooks, why the U.S. Federal Trade Commission called the cybersecurity of a company “shoddy,” details about a hack at Cloudflare and promises by some countries to get tougher on the abuse of commercial spyware.

Before we get to the discussion I want to do a quick review of other headlines this week:

Remember that deepfake video conference call that I said Terry and I will talk about? One of the ways fake content can be spotted is if it doesn’t have a label or watermark attesting to its legitimacy. There’s a group of tech companies called the Coalition for Content Provenance and Authenticity that’s trying to do just that. In the latest news, Google joined the coalition this week. The goal is to create tamper-resistant metadata that can be attached to any digital content — a photo, a video or an audio file — that shows how and when the content was created or modified.

Remember I said that in the discussion Terry and I will also talk about countries promising to take action against the abuse of commercial spyware? The spyware comes from developers who find holes in applications and exploit them. How big a problem is it? Google issued a report this week saying commercial spyware is behind half of the known zero-day exploits targeting Google products and Android devices.

Separately, Google said it is about to start a pilot project in Singapore that blocks the loading of financial fraud apps on Android devices. If it’s successful the effort could spread to other jurisdictions.

A New York City medical centre will pay US$4.75 million to settle allegations by the U.S. Department of Health and Human Services that potential data security failures led to an employee stealing and selling health information on 12,000 patients. The hospital didn’t know about the theft until alerted by police. Problems included failing to monitor and safeguard the hospital’s health information system.

Two big data breach notifications in the U.S. took place this week: Verizon Communications said a staff member stole the personal information of over 63,000 employees last September. And Bayer Heritage Federal Credit Union of West Virginia said personal information on just over 61,000 customers was taken in a cyber attack last fall.

Finally, JetBrains, Cisco Systems, Fortinet and VMware released security fixes this week. JetBrains says there is a critical vulnerability in its TeamCity server that needs to be patched. The Cisco patches fix critical holes in Cisco’s secure remote access Expressway Series. Fortinet released updates for its FortiSIEM security information and event manager to plug holes. And VMware released patches for Aria Operations for Networks to close five vulnerabilities.

(The following is a transcript of the first of four topics discussed. To hear the full conversation play the podcast)

Howard: Topic One: An employee was recently suckered into transferring millions to crooks based on a sophisticated deepfake video call.

Hong Kong police say the employee, who worked in the finance department of an unnamed multinational company, was tricked into sending US$25 million to crooks by what appeared to be the company’s chief financial officer on a video conference call. The employee got an email message asking them to get on the call, which was about a secret transaction. And there on the video call were the CFO and other people the staffer recognized. So he followed instructions.

This is an example of the sophistication of fake video calls, perhaps helped by artificial intelligence. But the big question is, did this company have no business process rules, such as “transfers over $1 million must have double authorization”?

Terry Cutler: This is going to require a multifaceted approach. If you’re dealing with a CFO who is used to transferring large amounts of money, it’s going to be a bit more tricky than just saying, ‘Oh, they didn’t have the proper processes.’ Companies are going to start bringing in more AI-based detection and prevention solutions. Because these deepfakes are so difficult to spot, it’s going to take a detection system on steroids. It’s going to come down to ‘My AI bot just beat your AI bot.’ That’s going to get really tricky. You’d think humans are eventually going to lose control because they can’t keep up with what’s going on behind the scenes with AI. But we have to start looking at something more, maybe more advanced authentication and verification methods. For example, signing payments with digital signature algorithms such as RSA or ECDSA, the elliptic curve digital signature algorithm. These are all tactics that can help.
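
(Editor’s note: to make the digital-signature idea Terry mentions concrete, here is a minimal, hypothetical Python sketch, using the third-party cryptography package, of signing a payment instruction with ECDSA and verifying it before execution. The payment fields and the is_authorized helper are made up for illustration; a real payment system would keep keys in hardware and use its own APIs.)

```python
# Hypothetical sketch only: an approver signs the exact payment details with
# ECDSA, and the payment system verifies the signature before executing.
# Requires the third-party 'cryptography' package.
import json
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# In practice the approver's private key would live in an HSM or smart card;
# here it is generated in memory for illustration.
approver_key = ec.generate_private_key(ec.SECP256R1())
approver_public_key = approver_key.public_key()

payment = {"to_account": "EXAMPLE-000", "amount_usd": 25_000_000, "reference": "INV-2024-001"}
payload = json.dumps(payment, sort_keys=True).encode()

# The approver signs the serialized payment instruction.
signature = approver_key.sign(payload, ec.ECDSA(hashes.SHA256()))

def is_authorized(payload: bytes, signature: bytes, public_key) -> bool:
    """Refuse to execute unless the signature matches the registered approver key."""
    try:
        public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(is_authorized(payload, signature, approver_public_key))      # True
# Any change to the payment details invalidates the signature.
tampered = json.dumps({**payment, "amount_usd": 99_000_000}, sort_keys=True).encode()
print(is_authorized(tampered, signature, approver_public_key))     # False
```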

As for awareness training, we’re seeing a big problem because users are so used to templated training, which is very, very boring. Employees are not engaging with it. They don’t see a need for cyber security because they think it doesn’t concern them, but they need to understand that this is everyone’s responsibility. So we need other types of training that are more edutainment. That will help educate them on why it’s so important to stay up to date with cyber security, and not just after they’ve been the victim of a scam. We [also] need proper incident response plans for what happens when this type of thing goes wrong, especially around deepfakes. It’s getting so difficult to spot them. And, of course, they should be sharing information about how this [scam] occurred so other companies don’t fall victim.

Howard: One tip-off for the employee was that the email inviting him onto this video conference call said, ‘This is a secret transaction.’ In awareness training one of the things you’re warned about is to look for little signs like, ‘Please treat this as confidential’ or ‘This is a matter of urgency and you’ve got to transfer this money quickly.’ To be fair to the employee, according to police he was initially suspicious. But all of the people on this video call looked real and looked like people he knew.

Terry: That’s what’s going to be tricky. Imagine you wake up one morning and your bank account is drained, and you call up your bank and it says this was an authorized transaction. Your colleagues were on the call. It was voice-verified. It was email-signature verified. Everything was verified, and you’re left with an empty bank account. It’s very, very scary what’s coming up.

Howard: I appreciate that this was a big company and presumably was used to transferring large amounts of money — and I assume that the employee was someone who had authorization to transfer large amounts of money. But $25 million is big cash. You need verification controls.

Terry: I agree, and I think this is something they’re going to put in place now. We’re going to have multiple members [of the company] who have to sign off on this [large transfers], more than just dual authentication. Maybe it’s going to be better to have other people who are responsible for the transaction actually on the call, as well as a separate call to make sure it was really them; implement a hierarchical approval workflow. Maybe have some independent channels that can verify via a phone call. Maybe also set up transaction limits.

I’ll give you an example. One of my friends was defrauded of $445,000 from his company. Originally he was never wiring more than $50,000. But when he got hacked the scammers took control of his bank account and started wiring large amounts to Mexico. The banks never stepped in because his accounts were preauthorized for half a million dollars. Because the threshold was set that high, the [crooks’] transactions went through. So I think they [banks] are going to start looking at transaction limits before giving approvals.
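
(Editor’s note: here is a minimal, hypothetical Python sketch of the kind of tiered approval workflow and transaction limits Terry describes. The thresholds, roles and the $5-million hard limit are assumptions for illustration only, not anyone’s actual controls.)

```python
# Hypothetical sketch only: a tiered payment-approval policy with transaction
# limits, multiple sign-offs for large wires, and an out-of-band call-back check.
from dataclasses import dataclass

@dataclass
class Transfer:
    amount_usd: float
    approvals: set[str]          # user IDs who signed off
    out_of_band_verified: bool   # confirmed on a separate call or channel

def required_approvals(amount_usd: float) -> int:
    # Tiered policy: bigger transfers need more independent approvers.
    if amount_usd < 50_000:
        return 1
    if amount_usd < 1_000_000:
        return 2
    return 3

def may_execute(t: Transfer, per_transfer_limit_usd: float = 5_000_000) -> bool:
    if t.amount_usd > per_transfer_limit_usd:
        return False                                   # hard transaction limit
    if len(t.approvals) < required_approvals(t.amount_usd):
        return False                                   # not enough sign-offs
    if t.amount_usd >= 1_000_000 and not t.out_of_band_verified:
        return False                                   # big wires need a call-back
    return True

# A $25-million wire with one approver and no call-back is rejected outright.
print(may_execute(Transfer(25_000_000, {"finance_clerk"}, False)))        # False
# A smaller wire with two sign-offs passes the policy.
print(may_execute(Transfer(800_000, {"cfo", "controller"}, False)))       # True
```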

Howard: And as you said, this incident speaks to the sophistication of fake voice and video these days.

Terry: This is really scary stuff, because it’s very difficult to know if it’s fake. We’re going to need help from third-party vendors, maybe some telecoms that can trace the signature and see where it came from.

Howard: In related news, this week Meta announced that it will soon label all AI-generated images posted on Facebook and Instagram to help people be aware of fake pictures. It won’t matter whether the images were created with Meta’s AI tool or another company’s tool. There will be some sort of label or watermark. Right now Meta marks photos on Facebook and Instagram that were made with its tool. It says beside the picture, ‘Imagined with AI.’ Hopefully there will soon be a capability to tag not only AI-generated still photos but also videos and audio files. Meta says that if it determines a digitally created or altered image, video or audio file has a high risk of deceiving the public on a matter of importance, the label may be more prominent than the one it gives to other images. This watermarking wouldn’t have helped in the deepfake video call case we just discussed, because that was a private call. But it shows that industry players are thinking about this and trying to find solutions.

Terry: It’s going to be interesting to see, because AI is heavily used for marketing as well. And since the rise of ChatGPT we see all these so-called marketers coming in with new methods to sell their products. There’s a heavy reliance on AI. It’ll be interesting to see social media platforms saying, ‘This was created with ChatGPT and is not original.’

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
