Cyber Security Today, Week in Review for the week ending Friday, July 28, 2023

Welcome to Cyber Security Today. This is the Week in Review for the week ending Friday, July 28th, 2023. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.

In a few minutes Jim Love, CIO of IT World Canada, will be here to comment on recent news. But first a look at some of the headlines from the past seven days.

Seven leading American AI companies, including OpenAI and Google, promised that their AI systems will undergo outside safety testing before they are publicly released. Then they formed a new industry group to promote best practices. Jim will have thoughts on those announcements.

We’ll also talk about new data breach disclosure regulations for publicly traded companies in the U.S., and a survey of IT and security leaders with worrisome results.

And Jim will also talk about IT World Canada’s annual Top Canadian Women in Cybersecurity event this week.

Also this week, the extent of the damage that information-stealing malware like RedLine can do was exposed. According to researchers at Flare Systems, crooks are selling hundreds of thousands of logs holding credentials for business applications that this type of malware vacuumed up.

The average cost of a data breach continues to rise, according to IBM. Globally, the average breach now costs US$4.45 million. That’s up 2.3 per cent over last year.

Threat actors behind a remote access trojan toolkit that researchers call Decoy Dog have been trying to protect themselves since an April report by Infoblox. In an updated report the company says the threat actors have taken down some of the name servers identified in April, but have also made other moves to ensure victims’ systems remain compromised. Because this malware uses DNS for its communications, IT departments are warned that their DNS servers have to be closely monitored.
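
Infoblox hasn’t published its detection logic, but one common way IT teams watch for DNS-based command-and-control is to flag long, high-entropy query names, because encoded C2 payloads rarely look like human-chosen hostnames. Below is a minimal, illustrative Python sketch; the one-line “client domain” log format, the 20-character length cutoff and the entropy threshold are all assumptions made for this example, not anything taken from the Infoblox report.

import math
from collections import Counter, defaultdict

def shannon_entropy(label: str) -> float:
    """Bits of entropy per character; encoded payloads tend to score high."""
    counts = Counter(label)
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def looks_suspicious(domain: str, entropy_threshold: float = 3.5) -> bool:
    """Heuristic: a long, high-entropy leftmost label suggests tunnelled data."""
    label = domain.split(".")[0]
    return len(label) > 20 and shannon_entropy(label) > entropy_threshold

def scan_log(lines):
    """Group suspicious lookups by the client that made them."""
    hits = defaultdict(list)
    for line in lines:
        parts = line.split()
        if len(parts) != 2:  # skip malformed log lines
            continue
        client, domain = parts
        if looks_suspicious(domain):
            hits[client].append(domain)
    return hits

sample = [
    "10.0.0.5 www.example.com",
    "10.0.0.7 dGhpc2lzYmFzZTY0ZW5jb2RlZA.evil-c2.example.net",
]
for client, domains in scan_log(sample).items():
    print(client, "->", domains)

A heuristic like this throws false positives (content-delivery networks and some security products also use machine-generated names), so hits are leads for triage rather than verdicts.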

A Russian court has convicted the co-founder of cybersecurity threat intelligence firm Group-IB of high treason and sentenced him to 14 years in prison. Ilya Sachkov, who spent almost two years in pre-trial detention, was accused of passing information to foreign intelligence in 2011 about the Russian-based threat group dubbed Fancy Bear. I regularly carry reports based on Group-IB threat intelligence.

Finally, there’s a new artificial intelligence tool available to cybercrooks. It’s called FraudGPT. Access to it costs $200 a month, or $1,700 a year. The person or persons behind it say it can be used to write malicious code, create malware and find vulnerabilities. Interestingly, the person advertising the tool reportedly uses the alias CanadianKingpin. Another AI tool called WormGPT is also being marketed to threat actors.

(The following transcript has been edited for clarity. It covers only the first of four topics we discussed. To hear the full conversation, play the podcast.)

Howard: We’ll start with the latest happenings in artificial intelligence. There was a big splash at the White House a week ago today when President Biden said seven leading AI companies, including OpenAI, the company behind ChatGPT, promised to safeguard their AI systems against cyber threats, to manage security risks and to submit their systems to independent testing before release. Is this substantive, or window dressing?

Jim Love: I honestly believe these people are trying, but is it the right direction? Everybody’s worried that AI overlords are going to kill us all or enslave us all. I will say that what doesn’t kill me makes me stronger. There are people seriously putting money into making AI safer. People should realize that OpenAI has declared it will put 20 per cent of its computing resources and some of its best people on what it calls alignment, or the ability to keep AI in check. How? I don’t know if anybody really knows. The bigger problem is that AI is here. It’s being used in cybercrime. That ship has sailed. You did a story on WormGPT [an AI tool for cyber criminals] … It’s built on an earlier model, but it’s for rent. It’s pretty damn good and it’s on the internet for people to design attacks. These things are good enough right now. And they will keep developing.

There’s an upcoming podcast I’m doing on deep fakes, particularly voice fakes, and that technology is out there now. Last weekend I did a story on one of the key guys from OpenAI who put out a version [of a chatbot] he calls Baby LLaMA [based on the LLaMA large language AI model] … on a regular laptop … They’re tiny: a dozen neurons in their system, versus the 175 billion that ChatGPT runs on … A lot of this stuff is open source, and the cost of computing is coming down.
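
Some quick arithmetic shows why those tiny models fit on a laptop while GPT-3-class models don’t. The Python sketch below is a back-of-the-envelope estimate: the 15-million-parameter figure approximates the published “baby” LLaMA checkpoints, 175 billion is the widely cited GPT-3 parameter count, and four bytes per parameter assumes 32-bit weights.

def weights_size_gb(params: float, bytes_per_param: int = 4) -> float:
    # Memory needed just to hold the weights (fp32 = 4 bytes each);
    # inference needs somewhat more for activations and caching.
    return params * bytes_per_param / 1e9

for name, params in [("baby model, ~15M parameters", 15e6),
                     ("GPT-3-class model, 175B parameters", 175e9)]:
    print(f"{name}: ~{weights_size_gb(params):,.2f} GB of weights")

# baby model, ~15M parameters: ~0.06 GB -> fits easily in laptop RAM
# GPT-3-class model, 175B parameters: ~700.00 GB -> needs racks of accelerators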

Do you remember when we started to see that people were renting out ransomware kits … and suddenly attacking everybody? We’re coming back to that, only this time with AI. That’s my big worry.

Howard: Is the voluntary agreement by companies an attempt to shape future American AI regulation? I note that this week, many of the same tech companies also announced a new group called the Frontier Model Forum. That group’s goal is to push best practices for emerging AI models.

Jim: I think that’s a good thing. These things are all interrelated. And it’s funny to see competitors working together. But in reality, a lot of what’s there is published as research and is out in the open, so the sharing is not that unusual. It makes perfect sense for people to do that. But I think there’s also a PR element. The big fear about AI is out there, so they need to show they’re doing something or there’s going to be real pressure for strong legislation. And I wouldn’t trust that if I were them. The people who write the legislation and regulations don’t know a whole heck of a lot about AI.

Howard: Also this week, Canadian AI researcher Yoshua Bengio pushed for regulation in the United States, telling a U.S. Senate committee hearing that AI systems capable of human-level intelligence are only a few years away. They could outpace our ability to control their risks, he said. Is this scaremongering?

Jim: I don’t know if it’s scaremongering, or if the answer is that nobody knows. I’ve listened to very intelligent people who said things like, ‘This is the end of humanity,’ and smart people who say, ‘It’s overblown.’ Somewhere in the middle is Geoffrey Hinton from U of T [the University of Toronto], who was at Google. I’m more inclined to listen to him. He says, ‘These things are going to be a whole lot smarter than us very quickly. We need to beware, but we don’t need to be in a panic. We need to do something about it.’ And I think these volunteer committees are in the process of maybe doing something about that. The scary thing for me is when it’s in the hands of private companies. And I’m not a socialist. I am an absolute capitalist. But private companies do things for profit; that’s what they’re really good at. I don’t think the legislation and regulation of something as important as AI should be subject to a profit motive. That makes no sense to me. So I’m happy for the volunteerism here, but we need intelligent politicians in the Canadian sense, not the American sense of yelling at each other: smart people brought together to advise politicians on what we have to do. And we need to do it quickly. Like I said earlier, on cybersecurity the horse has left the barn. That’s over. But on the control of AI in the bigger sense, as it gets smarter and smarter, we still have a chance.

Howard: What about Canada’s proposed AI legislation? That was introduced a year ago. Parliament still hasn’t started committee hearings on it. Not only that, it’s tied up as part of proposed privacy law reform. A lot of experts say we need to separate those two pieces of legislation. The AI legislation, among other things, would force businesses deploying high-impact AI technologies to use them responsibly, and would also appoint an AI data commissioner to oversee so far unspecified regulations. The commissioner would be part of the Innovation department and not independent. Does Canada need to act faster on AI regulation?

Jim: God, yes. When I say our system of listening to smart people has historically been good, think of the Privacy Commissioner of Ontario [who created] Privacy by Design. We’ve done fantastic things when we listen to smart people who know what they’re talking about. But it hasn’t been easy. Getting politicians to move, getting the system to move, is the Canadian problem. And we need to solve that, because we’ve traditionally been able to coast and pick up European legislation, or American legislation in some cases. I don’t know if that’s an option for us anymore. We need to be doing something, and a lot faster than we’re doing it.

Howard: The EU is already looking at a draft law that would ban high-risk AI systems such as facial recognition in public spaces, predictive policing tools, and systems that rate people on their behavior.

Jim: The EU is so far ahead of us on so many concepts right now. For a democracy that’s not supposed to be functional, especially after Brexit, they are doing a heck of a job. I think at times they go over the line, and fortunately they seem to pick mostly on big tech. There are some restrictive things I would rather not have holding back commercial enterprises. But for the most part they are far advanced in their action, their understanding of the issues and their ability to articulate something. Just look at the right to be forgotten: it’s long settled as an issue in Europe, while here we’re still debating it. Maybe there are a couple of American states [taking action]. It’s the same in cybersecurity, the same in privacy. They’re so far ahead of us I can’t even answer. And that’s a danger for Canada because, as much as we’re dependent on U.S. trade, we need trade with Europe. It’s an expanding and growing economy, and we have a lot in common with those countries. Keeping on par with them and staying up to date with them is, I think, an important competitive thing for the Canadian economy. And we’re not doing a good enough job at it.

Jim Love, Chief Content Officer, IT World Canada
Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
