Cyber Security Today, Week in Review for Friday, May 5th, 2023

Welcome to Cyber Security Today. This is the Week in Review edition for the week ending Friday May 5th, 2023. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.


In a few minutes Terry Cutler of Montreal’s Cyology Labs will be here with thoughts about recent news. But first a look back at a few of the headlines from the past seven days:

A Canadian hospital acknowledged that patient data held by a third party was copied by a hacker. Where was the data? On a test server. Terry and I will wonder why this keeps happening.

We’ll also discuss the theft of information from a decommissioned server belonging to the American Bar Association, whether the FBI needs more money to fight cybercrime and Samsung’s order to employees to stop using generative AI systems like ChatGPT.

Elsewhere in the news, as this podcast was being recorded on Thursday the city of Dallas, Texas was still working to recover from a ransomware attack. The municipal courts were closed. Police and fire services were unaffected, but their websites had to be taken offline. And the city’s websites were still not fully restored.

A New Jersey appeal court has upheld a key cyber insurance decision. The ruling ordered an American insurance company to help pay pharmaceutical firm Merck for over $1 billion in losses it suffered in the 2017 NotPetya cyber attack. The insurance policy had a clause saying the insurer didn’t have to pay for losses caused by a hostile or warlike act. But the appeal court backed a lower court finding that the insurance company didn’t prove the damages were caused by an act of war. An estimated 40,000 Merck computers were infected and had to be replaced. It is believed the NotPetya worm was created by Russia and aimed at Ukraine. However, the malware spread more widely than intended.

The U.S. Federal Trade Commission says Facebook has failed to fully comply with the US$5 billion privacy order it was handed three years ago. It alleges parents have been misled about the platform’s ability to control who children talk to through the Facebook Messenger Kids app. It also alleges Facebook misrepresented the access some app developers had to private user data. The regulator is proposing that Facebook, Instagram, WhatsApp and Oculus be prohibited from profiting from data they collect from users under the age of 18. Their parent company, Meta, would also be prohibited from releasing new products and services without written confirmation that its privacy program complies with FTC orders. Meta has been given time to reply.

Speaking of Meta, in its quarterly threat report the company said it recently found nearly 10 new malware strains on its platforms, including several being peddled as ChatGPT browser extensions and productivity tools. Please be careful with what you download.

Two hundred and eighty-eight people have been arrested for allegedly buying and selling drugs on the dark web. Law enforcement agencies in nine countries including the U.S. participated in the operation. It began when German police seized the infrastructure of the Monopoly Market website at the end of 2021, followed by the closing last year of the Hydra online marketplace.

Separately, the U.S., Germany and Austria said they took down the websites of a service called Try2Check. It let crooks check the validity of stolen credit card numbers. The U.S. also announced a Russian resident has been indicted for allegedly operating the service. And the U.S. said it is offering a US$10 million reward for information leading to the man’s capture.

(The following transcript is an edited version of part of the discussion. To hear the full conversation play the podcast)

Howard: Let’s start with ChatGPT, since everyone’s talking about it. Another firm has told employees not to use ChatGPT and other online AI systems. The company this time is Samsung. The order comes after an employee uploaded source code to the online version of ChatGPT to check it for accuracy. However, that exposed sensitive corporate code to anyone who could figure out how to get it. It seems ChatGPT is causing more problems than it solves.

Terry Cutler: I think it’s significant because it highlights the concerns organizations have about the security and privacy of their data. A leak like this can have serious consequences for reputation and business operations. As you mentioned, there are ways we can just upload source code to ChatGPT and say, ‘Can you find some vulnerabilities in my code, make sure I’ve got everything secure?’ And it starts spitting out these vulnerabilities. In one scan it started finding zero-days that no one knew about. So the power of AI is very, very strong. I think because it’s fairly new people are just curious. They want to see what this thing knows about anything — and it’s causing a lot of risk. The developer of ChatGPT, OpenAI, is taking steps to ensure more security and privacy over users’ data. For example, OpenAI is launching an incognito mode that allows users to disable their chat history. It’s also working on a version for businesses that wouldn’t share personal information.

Howard: Does restricting [employee] access to generative AI show a lack of confidence in it?

Terry: No. [But] I think we don’t know the dangers of it yet, because it could be used for good, or do a lot of damage … When you start looking at ways that you can find vulnerabilities in other systems using AI it becomes really problematic.

Howard: How insecure is ChatGPT? There’s been a report of a data leak that OpenAI had to close. Should most organizations be telling employees to stay away from this?

Terry: It’s a double-edged sword, because nothing is 100 per cent secure. You can make it harder for hackers to get in, but if it’s vulnerable they’re going to gain access to whatever you know.

Howard: As we recorded this podcast on Thursday the White House announced a series of measures to address the challenges of AI systems. The U.S. government will introduce policies to shape how federal agencies procure and use AI systems. Also on Thursday the U.K. asked regulators there to think about how AI systems can be used safely, with transparency and fairness. This raises the question of the need for regulation of AI. There’s legislation right now before Parliament in Canada. The Americans are thinking about it.

Terry: I think they’re venturing into an area where they’re not a hundred per cent sure of the power of AI. What matters is that we put in laws that will help govern data privacy, security, transparency, accountability and fairness, and promote innovation. It’s a balancing act, because we’re already seeing evidence that AI is being programmed to lie with deep fakes, where they [U.S. Republicans] are creating videos that look like they’re ready to go to war, when the footage was created by AI.

Howard: Meanwhile, crooks are taking advantage of people’s hunger for information on AI. Meta said this week that since March it’s blocked more than a thousand malicious links related to ChatGPT scams like fake browser extensions and phony mobile apps.

Terry: We’re also seeing scams where criminals get hold of your child’s voice and deep fake it, so they can call the parents and it sounds like the child is sniffling and crying, saying they’ve been kidnapped. It’s extortion.

Howard: That was very alarming. Coincidentally, a friend of my family’s had that problem over a year ago — and that wasn’t with an AI-generated fake voice. Someone called her and said, ‘Grandma, Grandma, please help me. I’ve been in an accident. I need money.’ And it sounded like her granddaughter — only she knew it wasn’t, because her granddaughter wouldn’t have called her ‘Grandma.’ She had a different name for her. That’s how she knew it was a scam. AI systems that can fool around with voice are only going to make this problem worse.

Terry: Here’s another one. I think it was on CNN where the host had a conversation with himself: AI recreated his exact voice and tonality. It was like having a real conversation with yourself in a mirror. I think what’s going to happen is we’re going to see a lot more scams coming out as this gets perfected by criminals. Could you imagine the CEO or CFO scams that are going to happen: [An employee gets a phone call] ‘Hey, can you wire $50,000 to this account and keep it hush-hush because it’s a top-secret project.’

Howard: The counter to that is having policies for employees that say it’s not appropriate to act on a phone call [with financial instructions] without some sort of confirmation. When somebody asks for a transfer of money [by phone] there has to be verification. That’s a business process.

Terry: Absolutely. We’re going to have to start adding to that list of things to check to make sure you don’t fall victim to cyber criminals. Like I said, scams are evolving so quickly now … But all these scammers like to call you at five o’clock on a Friday when you want to get out of the office and just get whatever done — and [they hope] you’ll make mistakes.

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
