Cyber Security Today, Week in Review for the week ending March 3, 2023

Welcome to Cyber Security Today. This is the Week in Review podcast for the week ending Friday, March 3rd, 2023. From Toronto, I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.


In a few minutes University of Calgary professor Tom Keenan will be here to discuss the security implications of artificial intelligence and ChatGPT. But first a look at some of the headlines from the past seven days:

The White House issued a new National Cybersecurity Strategy that calls on IT companies and service providers to take more responsibility for poorly written applications and poorly secured services. If Congress agrees, some critical infrastructure providers will face mandatory minimum cybersecurity obligations.

Password management provider LastPass has admitted that last August’s breach of its security controls included hackers compromising the home computer of one of the company’s developers, leading to a second data theft.

Canada’s Indigo Books isn’t the only book retailer recently hit by a cyber attack. In a brief statement filed with the London Stock Exchange, Britain’s WH Smith said it suffered a cybersecurity incident in which current and former employee data was accessed. Indigo, meanwhile, was hit by ransomware, with employee data stolen by the LockBit gang.

Police in the Netherlands have now acknowledged arresting three men in January on allegations of data theft, extortion and money laundering. Police believe thousands of companies in several countries were victims of the gang. The men allegedly stole a huge amount of personal information, including dates of birth, citizen service numbers, passport numbers and bank account numbers. One of the alleged attackers worked at the Dutch Institute for Vulnerability Disclosure.

GitHub’s secrets scanning service is now generally available to developers for screening public code repositories; until now it was a beta service. The secrets it searches for are things like account passwords and authentication tokens that developers add to their code repositories and forget to delete. GitHub secrets scanning works with more than 100 service providers in the GitHub partner program.
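To make the idea concrete, here’s a toy sketch of what pattern-based secret scanning does: match well-known credential formats in source files. This is not GitHub’s implementation; the two token formats below are publicly documented, and everything else is invented for illustration.

    import re
    import sys
    from pathlib import Path

    # Two widely documented token formats (an illustrative subset only).
    PATTERNS = {
        "GitHub personal access token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
        "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    }

    def scan_file(path: Path) -> None:
        # Flag any line containing something shaped like a known secret.
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

    if __name__ == "__main__":
        for arg in sys.argv[1:]:
            scan_file(Path(arg))

The real service adds provider-specific patterns and, for partner providers, verification that a matched token is live.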

Poorly protected deployments of Redis servers are being hit with a new cryptojacking campaign. Researchers at Cado Security say an exposed Redis server can be forced to save its database file to a location where the file’s contents are executed as commands; in this campaign, the commands download a cryptominer. Make sure your Redis servers are locked down.
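What “locked down” looks like can be sketched in a few lines of redis.conf. These are standard Redis directives, but treat this as a minimal illustration rather than a complete checklist, and replace the placeholder password:

    # Listen only on the loopback interface, not on all addresses
    bind 127.0.0.1 -::1
    # Refuse remote connections when no password has been set
    protected-mode yes
    # Require clients to authenticate before running any command
    requirepass replace-with-a-long-random-secret
    # Disable CONFIG, which attackers use to redirect where the database file is written
    rename-command CONFIG ""

Keeping Redis inside a private network and off the public internet matters more than any single directive.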

And the websites of nine hospitals in Denmark went offline last weekend following distributed denial-of-service (DDoS) attacks from a group calling itself Anonymous Sudan. According to the cybersecurity news site The Record, Anonymous Sudan claimed on the Telegram messaging service that the attacks were “due to Quran burnings,” a reference to an incident in Stockholm in which a man set the holy book alight in front of the Turkish embassy. Hospital operations weren’t affected.

(The following transcript has been edited for clarity and length. To hear the full conversation play the podcast)

Howard: Tom taught what is believed to have been the first university course in computer security, in 1974. That’s when only governments, banks, insurance companies and airlines had computers. He’s the author of a book on privacy and capitalism called Technocreep. An adjunct professor of computer science at the University of Calgary, he’s also affiliated with the university’s school of architecture, where he keeps an eye on technology and smart communities. Professor Keenan is also a fellow of the Canadian Global Affairs Institute.

Last month he testified before the House of Commons defence committee, which is looking into cybersecurity and cyber warfare. He spoke on artificial intelligence and ChatGPT, and that’s why he’s my guest this week.

You worry about the dark side of artificial intelligence. Why?

Tom Keenan: I always worry when everybody loves something, and since last November everybody’s been into ChatGPT … That’s the problem: we haven’t really been very critical about it. Many years ago I was teaching high school students to write neural networks, and I gave them a project: come up with something good. Of course, being teenagers they wanted to get hands-on with each other, so they decided to measure each other’s bodies. They found out that the hip-to-waist ratio is a good predictor of whether you’re male or female. At the end of the program they had kind of a science fair and they showed this program off, measuring members of the public. A rather portly gentleman from one of the sponsoring companies came by and said, ‘What am I?’ And they said, ‘Sir, with 84 per cent certainty you’re female.’ I love that because it shows what AI is: AI is a game of guessing and probability. Yet I go to ChatGPT and it tells me things as if they’re facts.

I’m working with a lawyer as an expert witness. I told ChatGPT to give me a legal precedent.

And it gave me a Supreme Court of Canada judgment that doesn’t exist. It made it up to cover its tracks. We have a piece of technology that can lie, that can be fed bad information, that its creators can’t explain and that pretends it’s right all the time. That’s a recipe for disaster.
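Keenan’s point that AI is “a game of guessing and probability” is easy to show in code. Here’s a toy sketch of the students’ hip-to-waist experiment using invented measurements and scikit-learn; like their model, it can only emit a confidence score, never a fact.

    from sklearn.linear_model import LogisticRegression

    # Invented waist-to-hip ratios; 0 = female, 1 = male in this toy labelling.
    X = [[0.70], [0.72], [0.74], [0.78], [0.88], [0.90], [0.93], [0.95]]
    y = [0, 0, 0, 0, 1, 1, 1, 1]

    model = LogisticRegression().fit(X, y)

    # For a new measurement the model produces a probability, not an answer.
    p_female = model.predict_proba([[0.80]])[0][0]
    label = "female" if p_female >= 0.5 else "male"
    confidence = max(p_female, 1 - p_female)
    print(f"With {confidence:.0%} certainty: {label}")

A portly visitor at the science fair is just another data point the model has to force onto one side of that curve.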

Howard: You told parliamentarians there are three things about AI that bother you.

Tom: One of them is this illusion of certainty. People will fall in love with it, they’ll start using it for all kinds of things and not think about the consequences. ChatGPT is trained on a wide variety of sources, but the version that’s available to the public now only knows about things through 2021 … Also, the training data can be biased, as we found with facial recognition. It can favour certain groups. And AI could even be actively poisoned: somebody who wanted to mislead it could feed it a lot of bad information and it would spit back bad results.

The second thing is the lack of ethics. Six years ago Microsoft infamously created a bot called Tay that conversed with the public. After a while it was spouting Nazi ideas and foul language. It referred to feminism as a cult. Microsoft lifted the cover to see how this all happened and realized Tay was just learning from the people who interacted with it. The people who had time to sit around talking to Tay had these kinds of ideas and it picked up on them. So there’s no ethical oversight for AI.

And the third thing would be the whole idea of consciously doing malicious things to the AI. There’s a woman who for years has been trying to rewrite the Wikipedia entry on the Nazis to paint them in a more favorable light. And you may remember in 2003 a whole bunch of Democratic supporters [went online and] linked the phrase ‘miserable failure’ to the [online] presidential biography of George W. Bush, so when you Googled ‘miserable failure’ his picture came up. Twenty years later, who knows what they could do to mislead AI?

Howard: You think intelligence agencies right now are busy trying to poison the wells of open-source data.

Tom: Absolutely. First of all, most of the really interesting stuff in [government] intelligence is not open source. If you train the thing on stuff that’s in the New York Times, that you can get from Google, that’s on people’s web pages, you’re only seeing a little fraction of it. The really good stuff is in the [government] secret or top-secret area. So the first thing the national defence people would have to do [to protect government AI systems] is create a kind of private version, almost like an intranet, that doesn’t rely on public data. And then of course agencies are trying to do disinformation regardless of AI; they’re always [publicly] putting out falsehoods. There’s no way to stop it. The [public] database [of all the information on the internet] is going to be poisoned by disinformation. So we better not rely on it.

Howard: ChatGPT differs from search engines in that rather than returning a list of links to information and websites, it can hold a conversation and create a readable document. You’ve said that your big objection to ChatGPT is that it makes answers look very authoritative when it’s really making things up out of nowhere.

Tom: I’ll give you an example, one I read to the Standing Committee on National Defence. I asked ChatGPT to write me a poem about the committee … ‘The standing committee on national defense/ within the House of Commons its power immense/ so they were all smiling. A place where decisions are made with care/ for the safety and security of all to share/ with members from every party they convene/ to review and assess and to make things clean.’ What does that even mean, ‘to make things clean’? I don’t know. ChatGPT is not going to tell us. Here we have something that’s patently nonsense coming out of ChatGPT.

Howard: What could threat actors do with ChatGPT? Or, what are they doing right now?

Tom: If we have an emergency of some sort, that might be the first place they [threat actors] go. Say the power fails in my house. The bad guys might [send a message] like, ‘Send one ten-thousandth of a bitcoin to this address and your power will come back on.’ It’s not that farfetched. I learned at the Defcon hacker conference a few years ago how to hack the Nest thermostat. You had to have hands-on access to update its firmware, but there are stories of people actually holding houses for ransom by taking over their thermostats. So one of the big things to worry about is the internet of things, all these connected devices. Something might go horribly wrong and we might be relying on AI to fix it, when the AI is actually being led down the dark path to make things even worse or to break all the safeguards.

Howard: What could a military do with ChatGPT?

Tom: The military could certainly use it to find things that are public, through open-source information. I’m able to track Vladimir Putin’s aircraft. It turns out he has quite a number of them; he’s a bit of an aircraft collector. He also has yachts. Because they have transponders I’ve been able to follow them on tracking sites. In fact, there’s a fellow who has a bot up on Twitter to track the movements of Putin and his oligarchs … And we have so much data. AI could be used to filter it [the public internet] to show the things that are really important [to them].

Howard: ChatGPT is new. I imagine that in their early years computer spelling and grammar checkers also made a lot of mistakes.

Tom: Definitely, and as the database gets better it will get better …

Howard: But I don’t think you’re arguing that we should make artificial intelligence applications unlawful.

Tom: No. But Ronald Reagan once said, ‘Trust but verify.’ So my slogan now is ‘Consult but verify.’ When my students write a long paper I say, ‘You want to use ChatGPT or Wikipedia or anything, that’s fine. What you’re not allowed to do is quote from it.’ First of all, Wikipedia can be misled. People can edit an entry. After a bit of time it gets corrected, but you might be the one who picked it up while it was wrong. And with ChatGPT you don’t know where it’s getting its data from. At least Wikipedia usually gives you references that you can go check. So what I tell my students is you can use it and consult it, but don’t trust it. Don’t use it as your [only] source.

Howard: As part of new Canadian privacy legislation now before the House of Commons, the government has proposed rules to oversee the use of artificial intelligence applications that could cause harm or result in bias. This part of the bill is formally called the Artificial Intelligence and Data Act, or AIDA. Businesses that deploy what the law calls high-impact AI systems would have to use them responsibly. There’d be clear criminal prohibitions and penalties regarding the use of data obtained unlawfully for AI development, or where the reckless deployment of AI poses serious harm. What do you think about this legislation?

Tom: It’s terrible, but they have my sympathy. I was involved in 1984 in writing Canada’s first computer crime law, and we discussed things that were quite interesting, like what if somebody steals my data? Well, look it up in the Criminal Code. What is ‘to steal’? It’s to deprive someone of their valuable property. If I take your data you may not even know I’ve got it, and you haven’t lost the use of it. So we had to do some pretty fancy footwork [in drafting the law]. And that was 1984, to write something as simple as crimes like unauthorized use of a computer, misuse of a computer and so on. Now it’s so much more complicated.

I looked at C-27, and for starters it talks about anonymized data. It makes a big thing about how you have to anonymize data if it’s in a high-impact system and say how you did it. But plenty of researchers have shown it’s pretty easy to de-anonymize data if you have three, four or five data points on somebody; you can work back to figure out who it is. Likewise, they talk about the person responsible. I make my students do an exercise where they do facial analysis. Most of the software programs they use come from Moldova and places like that. I don’t want them to send their own photographs to be facially analyzed, so I let them send my face, and it comes back with interesting comments about me.

The point is that this [proposed] law will only really help in Canada, but so much of the action is international that it’s really going to be a drop in the bucket. It might keep Telus or Shaw or some company like that from doing something untoward. But it’s really only touching the tip of the iceberg, and it may give us a false sense of security.
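Keenan’s de-anonymization point is straightforward to demonstrate. Here is a toy sketch with invented records, in the spirit of the well-known linkage attacks he refers to: a few quasi-identifiers left in an “anonymized” release are enough to join it against a public source.

    # Invented records. The "anonymized" release keeps three quasi-identifiers.
    anonymized = [
        {"birth": "1990-04-02", "postal": "T2N 1N4", "sex": "F", "diagnosis": "..."},
        {"birth": "1985-11-17", "postal": "M5V 2T6", "sex": "M", "diagnosis": "..."},
    ]
    # A public source (voter roll, social profile) that has names attached.
    public_register = [
        {"name": "A. Example", "birth": "1990-04-02", "postal": "T2N 1N4", "sex": "F"},
    ]

    QUASI_IDENTIFIERS = ("birth", "postal", "sex")
    for record in anonymized:
        for person in public_register:
            if all(record[k] == person[k] for k in QUASI_IDENTIFIERS):
                print(person["name"], "re-identified from",
                      len(QUASI_IDENTIFIERS), "data points")

With real datasets the join is noisier, but the principle is the same: anonymization that leaves quasi-identifiers intact can be reversed.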

Howard: What should information security leaders be telling their CEOs about artificial intelligence and ChatGPT?

Tom: It’s going to be a great thing, and it’s probably not going to take your job. It is true that ChatGPT can write code. I’ve experimented with it, and it writes pretty decent code if you give it good enough specifications. If you’re a low-level coder it might take your job, but if you’re somebody who understands the business and the higher-level goals you’ll probably still have a job. So once we’ve reassured people that they’re not going to be replaced by a robot tomorrow, the question is, can they use it? I have a friend who is the chief medical officer of a health clinic, and I asked him if radiologists will be replaced by artificial intelligence. He said no, but radiologists who don’t use AI will be replaced, because it’s going to be a vital tool. There are tumors that are too small for the human eye to see; that’s something AI can pick up on. The future is actually rosy in terms of being able to use AI well. The problem is, like everything, there are going to be people who want to exploit it for bad purposes. We are already seeing it used to write malware, in phishing attacks and in romance scams trying to get money out of people. It’s going to do a lot of good. It’s going to do a lot of bad. It’s going to be our job to figure out which is which.
