Cyber Security Today, Week in Review for the week ending Friday, March 31, 2023

Welcome to Cyber Security Today. This is the Week in Review for the week ending Friday, March 31st, 2023. From Toronto, I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.


In a few minutes David Shipley of Beauceron Security will be here to talk about recent events. But first a look back at some of the headlines from the past seven days:

There were calls from prominent names in technology, including Elon Musk and Steve Wozniak, for a six-month pause in developing advanced artificial intelligence systems. Is this needed, or is it a cry from competitors who can’t keep up? David and I will examine this.

We’ll also talk about the future of TikTok in the United States after the CEO’s testimony before Congress last week.

And because today is World Backup Day, when IT leaders should be thinking about the effectiveness of their data backup strategy, we’ll have some thoughts.

Also in the news, researchers at Rapid7 warned that IT departments aren’t patching a vulnerability in IBM’s Aspera Faspex file transfer application fast enough. Don’t wait for your regular patch cycle, they advise, because a working proof-of-concept exploit has been in the wild since February.

Microsoft said a newly discovered vulnerability in Outlook for Windows may have been exploited for almost a year. The revelation came as the company this week issued detailed guidance for IT defenders hunting for signs their servers have been compromised. Microsoft advises defenders to use an in-depth and comprehensive threat-hunting strategy to identify potential credential compromise. It issued a patch for this vulnerability on March 14th.
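
For context, this is the Outlook elevation-of-privilege flaw tracked as CVE-2023-23397: a crafted message carries a UNC path in its reminder-sound property, and Outlook will reach out to that path and leak the user’s Net-NTLMv2 credentials. Microsoft published its own server-side hunting script; purely as an illustration of the hunting idea, here is a minimal Python sketch that flags UNC-style paths embedded in exported .msg files. The folder layout and the byte-pattern heuristic are assumptions for illustration, not Microsoft’s method.

```python
import re
import sys
from pathlib import Path

# Outlook .msg files store strings as UTF-16LE, so a UNC path such as
# \\attacker.example\share appears with a null byte after each character.
# This pattern looks for "\\" followed by printable characters in that
# encoding. It is a crude heuristic, not a parser for the .msg format.
UNC_UTF16LE = re.compile(rb'(?:\\\x00){2}(?:[\x20-\x7e]\x00){3,}')

def scan_msg_files(root: Path) -> None:
    """Print any UNC-style paths found inside .msg files under root."""
    for msg_file in root.rglob("*.msg"):
        data = msg_file.read_bytes()
        for match in UNC_UTF16LE.finditer(data):
            unc = match.group().decode("utf-16-le", errors="replace")
            print(f"{msg_file}: possible UNC path {unc!r}")

if __name__ == "__main__":
    # Usage: python scan_msg.py <folder-of-exported-messages>
    scan_msg_files(Path(sys.argv[1]) if len(sys.argv) > 1 else Path("."))
```

Anything a heuristic like this flags still needs manual review, since legitimate messages can reference internal file shares; the signal Microsoft’s guidance keys on is the reminder-sound property pointing at an external host.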

The largest pharmaceutical maker in India, Sun Pharmaceuticals, said it has been hit by ransomware. In a stock exchange filing the company said the attack included the theft of some personal and company data.

In a March 8th podcast I told you the LockBit ransomware gang listed a Florida county sheriff’s office as one of its latest victims. This week the gang published stolen data.

New regulations came into effect in the U.S. this week allowing the Food and Drug Administration to reject new medical devices that don’t meet cybersecurity standards. Manufacturers are obliged to release security updates and patches, as well as provide a software bill of materials.
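
For readers who haven’t seen one, a software bill of materials is simply a structured inventory of the components inside a product’s software or firmware. As a hedged sketch (the device and component names here are invented), this is roughly what a minimal SBOM looks like when emitted in the open CycloneDX JSON format from Python:

```python
import json

# A minimal, illustrative SBOM in CycloneDX JSON form. The component
# names and versions are hypothetical; a real SBOM would be generated
# by build tooling and list every third-party dependency.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.8"},
        {"type": "library", "name": "zlib", "version": "1.2.13"},
        {"type": "firmware", "name": "infusion-pump-fw", "version": "2.4.1"},
    ],
}

print(json.dumps(sbom, indent=2))
```

The point of requiring such a list is that hospitals and regulators can match it against newly disclosed vulnerabilities, for example spotting every device that ships a vulnerable OpenSSL build.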

A collections agency is notifying almost 500,000 American residents that their data was stolen. NCB Management Services said the information included details of victims’ former Bank of America credit card accounts, such as their names, addresses, dates of birth and Social Security numbers.

And two new variants of the IcedID malware have been discovered. Researchers at Proofpoint say their purpose is to deliver additional payloads. The original IcedID is still being used to steal bank login credentials.

(The following is an edited transcript of one of the discussion topics. To hear the full talk play the podcast)

Howard: Let’s start with the letter from some technology leaders who are calling for a six-month pause on the training of AI systems more powerful than GPT-4. This is the model behind the chatbot that can return internet searches in sentences or paragraphs and create a flowchart. By coincidence or not, the letter came a day after Microsoft revealed an upcoming tool that uses GPT-4 to help security teams track down IT network compromises. You can ask the tool, which is called Microsoft Security Copilot, about a possible attack, and it searches your IT systems to find evidence of compromise and creates a report. Experts have worried for some time that automated as well as artificial intelligence systems can be biased against women and people of color when used to screen job and insurance applications, or in facial recognition. David, when you read this letter calling for a pause, what did you think?

David Shipley: They raise some valid concerns. AI has been found through peer-reviewed studies to be biased, both implicitly and explicitly, against various groups, and has had flawed decision-making. So there’s reason to be concerned. However, some concerns are overblown. I think it’s really important for listeners to acknowledge we are nowhere close to a general AI. What we have today is a hyper-accurate guesser, a digital parrot that can say really clever things but has no idea what it’s talking about. And the pause that’s being advocated, if it’s needed, is not nearly long enough. For example, here in Canada the proposed law to regulate the potential harms of AI isn’t even through the [federal] legislative process, let alone the additional substantial discussions on how to implement it in regulations. We’re probably two to two-and-a-half years away from that even being practical, in terms of regulations being approved and the resources available for policing them …

This discussion also comes at a time when the Council of Canadian Academies just published an expert panel report, commissioned by Public Safety Canada, raising a red flag that digital risks have outpaced society’s current ability to handle them. And that’s just with this generation of AI. We need to act to reduce those harms now.

Howard: While those signing the letter included university professors, there were also heads of technology companies who might be considered competitors to AI leaders. So is this about jealousy, or envy that a competitor has a better solution than my firm?

David: I can’t completely dismiss that concern, but because there are so many other signatories to this letter, including respected academics from Berkeley and MIT, I honestly don’t think that’s the primary motivation. I think the folks who signed it are genuinely afraid of this profit-centric mad scramble: almost like John Hammond in Jurassic Park, a mad race to do something cool without stopping to think of the consequences. That’s the driving force behind the strength of this letter and the call to action.

Howard: The people who signed this letter agreed with the statement that “Advanced AI could represent a profound change in the history of life on Earth.” That’s really dramatic. Taken further, it’s scaremongering.

David: That could be a valid criticism. I think we are at risk of overestimating AI’s capability. On the other hand, there are legitimate concerns that this technology could impact one in four jobs, which could have a profound impact on our modern economy. So while the scaremongering may be too much, there may be more of a point here than we can be comfortable with. I think the ultimate truth is that the people making this technology don’t fully understand it, and that’s where the risk is coming from. It’s what [former U.S. Secretary of Defence] Donald Rumsfeld once called the ‘unknown unknowns.’ And the danger here is we keep repeating the same patterns over and over again: we get overconfident in technology and we disregard risks we may even be able to anticipate. We did this with the Titanic. They knew April was a dangerous month [to sail] when that ship left on its maiden voyage. They knew things they could have done, but they were so overconfident in the technology (they had wireless communications so they could call for help, and the watertight compartments) that they ran full steam ahead into an iceberg without even enough lifeboats.

Howard: But it wasn’t that April was a dangerous month to sail. It was that April was a dangerous month to sail on the course they plotted, which was probably the shortest course between England and New York. It wasn’t dangerous if you plotted your course a hundred miles farther south, which would have been slower but would have reduced the risk of running into an iceberg.

David: You’re hitting on exactly the point: if they had just slowed down they still could have arrived at their destination successfully. But it was the speed they wanted, to cut the time, to optimize their journey, to reduce their cost and make more money, that landed them in that pickle.

Howard: If AI is about creating computers that simulate human thinking and behavior, is ChatGPT an AI system? It only looks where it’s pointed. If it’s pointed at the internet and a possible solution to a question isn’t on the internet, then it won’t be found.

David: On this point you couldn’t be more correct. We are not, thank God, staring down the face of an actual artificial intelligence. By that I mean something with genuine consciousness. That’s down the road. That’s a whole new problem. What we’re looking at with ChatGPT is a guessing machine. It’s a highly accurate guessing machine that knows how to string words and images together in patterns we can recognize. Sometimes those patterns are spot on and sometimes they’re dead wrong. For example, when it was trying to come up with a local restaurant recommendation for a reporter who was testing it, it just made up a fake restaurant and a fake location. And when it was challenged it tried to gaslight the journalist. That’s incredibly concerning. But it doesn’t truly understand the meaning of what it’s saying, and it lacks human insight.

Howard: Which brings us to the proposed AI legislation that the Canadian government put before Parliament nine months ago as part of privacy legislation reform. This proposed legislation is called the Artificial Intelligence and Data Act. It would mandate that businesses using high-impact AI technologies use them responsibly, including identifying and mitigating risks. An artificial intelligence and data commissioner would enforce regulations, which have yet to be detailed. So there’s a lot we don’t know about this legislation. Two questions: Is this proposed legislation reasonable and practical? And why hasn’t the government made passing this legislation, plus the privacy legislation, a priority?

David: I’ve read through the legislation, and I think a big battle is going to be fought over the definition of ‘high-impact AI.’ The meat of that discussion is being punted to regulation. It’s going to be really interesting to see how that plays out. My concern with this regulation also goes back to the ‘unknown unknowns.’ If you didn’t know the consequences were actually going to be high-impact, how could you potentially police this ahead of time? It’s a hell of a problem. Don’t get me wrong: we need regulation of AI, just like we needed to force carmakers to put in seatbelts. However, even if we get over the massive hurdle of getting regulations, we’ll end up with an AI commissioner who is deeply powerless in the same ways our [federal] Privacy Commissioner has been.

As for your second question, why it’s not been a priority: arguably the pandemic put the government and its ambitious digital charter agenda completely on its back. The pandemic slowed down the government and forced its focus to shift to the public health emergency.

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
