Cyber Security Today, Week in Review for Friday, May 19, 2023

Welcome to Cyber Security Today. This is the Week in Review edition for the week ending Friday, May 19th, 2023. I’m Howard Solomon, contributing reporter on cybersecurity for ITWorldCanada.com and TechNewsday.com in the U.S.


In a few minutes David Shipley of New Brunswick’s Beauceron Security will be here to discuss recent news. But first a roundup of some of what happened in the last seven days:

A U.S. Senate committee held the first of a series of hearings on possible federal regulation of artificial intelligence. The chief executive of OpenAI, a senior IBM official and an AI entrepreneur all called for some sort of regulation. David will have some thoughts.

We’ll also look at a new use of facial recognition at U.S. airports, how a cybersecurity company was fooled by a hacker impersonating a new employee and the publication by a ransomware gang of building schematics from an American school board.

In other news, Montana became the first U.S. state to ban TikTok. Federal and state government employees have already been prohibited from downloading the app on government devices for security reasons. But this new law goes further, prohibiting TikTok from being offered for download within the state.

The BianLian ransomware group has stopped bothering to encrypt victims’ data when it compromises an IT network. Instead it just steals data and then threatens to release it unless the gang is paid.

ScanSource, an American provider of technology solutions, has acknowledged being hit by a ransomware attack last weekend. In a statement Tuesday it said the company is working to get the business fully operational. The statement warns the attack may cause business problems for customers and suppliers in North America and Brazil.

The U.S. has announced criminal charges in five cases as a result of work done by its new Disruptive Technology Task Force. This is a multi-department group that goes after countries trying to illegally get sensitive American technology. Two of the five cases involve networks allegedly set up to help Russia buy U.S. technology. Two other cases saw former software engineers charged with stealing software and hardware code from their companies for Chinese competitors. The fifth case involved a Chinese network for providing Iran with materials for weapons of mass destruction and ballistic missiles.

Separately, the U.S. Justice Department identified a Russian resident as a member of the LockBit, Babuk and Hive ransomware gangs. He was allegedly involved in attacks on American organizations and others around the world that pulled in US$200 million.

An unusual ransomware group has emerged. According to Bleeping Computer, after the MalasLocker group hits an organization it asks the firm to make a donation to a nonprofit the gang approves of. As proof, the firm has to forward an email confirming the donation. Only then will the gang provide a data decryptor. Is this a stunt? I don’t know. The gang is going after unprotected Zimbra email servers.

Hackers are actively looking to exploit a recently revealed vulnerability in a WordPress plugin. This time it’s a plugin called Essential Addons for Elementor. According to security firm Wordfence, a patch for that vulnerability was released last week. Since then Wordfence has seen millions of probing attempts across the internet looking for WordPress sites that haven’t yet installed the fix. Which means if your site uses Essential Addons for Elementor and hasn’t installed the update, you could be in trouble.

Threat actors are increasingly hunting for vulnerable APIs to compromise. That’s according to researchers at Cequence Security. In fact, they say, in the second half of last year there was a 900 per cent increase in attackers looking for undocumented or shadow APIs.

A hacking group is exploiting an unpatched six-year-old vulnerability in Oracle WebLogic servers. Trend Micro says the 8220 Gang is using the hole to insert cryptomining software into IT systems. The gang is going after both Linux and Windows systems running WebLogic.

And researchers at Claroty and Otorio are warning administrators to patch industrial cellular devices from Teltonika Networks on their networks. Certain models have several vulnerabilities affecting thousands of internet-connected devices around the world. Patches have been issued and need to be installed fast.

(The following is an edited transcript of one of the four topics discussed. To hear the full conversation play the podcast)

Howard: Topic One: Regulating artificial intelligence. Most people realize the use of AI needs some sort of oversight. But what kind? At a U.S. Senate hearing this week witnesses raised several ideas: a licensing regime, testing for bias, safety requirements, even a global agency so there will be worldwide standards. David, where should governments go?

David Shipley: I think there’s a good reason why OpenAI’s CEO suggested licensing AI firms. That would be a hell of a competitive moat for the current leaders like his firm and others, and a giant barrier for any new entrant — and I think for that reason it’s a terrible idea. That isn’t to say that governments don’t need to do things. I think the idea of a global [regulatory] agency with worldwide reach is just pure fantasy. But I think governments need to think within their countries how to proportionally manage the risk of AI with a harm-based approach. That makes the most sense. Do we need big government to police Netflix AI for recommending television shows? Probably not. Do we need regulation on firms that use AI to screen job applicants or use AI in health diagnosis or for facial recognition for police use or AI in self-driving cars? Absolutely.

Howard: What does a harms-based system look like?

David: Number one, it has to look at the scale of the company, its reach and so on. Is it a brand-new startup? Does it have a couple hundred or a couple thousand users? The proportional risk is partly the reach of the platform, and partly the nature of the work that it might be doing. Again, if it’s a startup making a self-driving AI for a car, then it should be heavily regulated. If it’s making an AI to help you proofread your emails, maybe not as big a deal.

Howard: Can the way we regulate privacy set precedents? In various jurisdictions there are privacy obligations for companies to do certain things or else they’re offside of the legislation. Can we see something that’s done in Canada, the EU or California that would help guide people who want to create AI regulations?

David: I think there are some good elements in all of the privacy regulations we’ve seen related to the concept of privacy by design, which was invented by Canadian Ann Cavoukian when she was Ontario privacy commissioner. They make sense when considering AI regulation. But AI regulation is far more complex than privacy regulation. Good lessons from privacy by design that we can apply are making sure that users have informed consent, that people understand they’re using products that have algorithmic decision-making, and that AI systems are built and designed with security and privacy in mind from the conception stage, to the ongoing stage [deployment], to the management of the end of life of the product. I think modern privacy legislation can set some of the conditions for the kinds of data AI can work with. Legally, I think it’s really important. And they can be very complementary. But AI regulation needs to set the conditions on when and how artificial intelligence-derived decisions based on lawfully gained data can be used, particularly when it has an impact on human life, economic opportunities, health or well-being.

Howard: One of the things people worry about is bias in AI systems. How do you mandate that an AI system be transparent about bias?

David: This gets to the heart of what we need AI regulation to do. There are two parts to this: Companies should be able to explain clearly how their AI made its decision, how the algorithm works. This idea of black-box AI or machine learning, where no one quite knows how it arrived at a decision, is not okay, because you don’t have the ability to dispute it, to correct it, to find out if there are biases. That means companies have to do a better job of documenting their AI. And if you thought developers complain today about documenting code, welcome to the new and absolutely essential nightmare of documenting AI algorithms. We’ve got to understand how these things work. Also, AI regulations should make it possible for regulators to review any training datasets that firms used, to identify issues such as systemic, explicit or implicit bias, and to provide a review point for any firms or individuals who may challenge AI companies over the potential use or misuse of copyrighted materials used to train their systems.

This leads me to the most hilarious example I’ve seen so far, involving a group of fan fiction writers for a very popular television show known as Supernatural. They learned that a particular app called SudoWrite, which uses GPT-3, knew about a very particular and obscure sex trope they had created within their fan fiction forum, because the language model had scraped their site without necessarily having their consent. And, hilariously, it knew how to use this trope in the appropriate context for writing. [See this Wired magazine article for more.] It highlights the point I was making about the ability to audit the training datasets that companies may be using, which may or may not have had proper consent.

Howard: Should countries have a national approach to AI? I ask because in the U.S. the Biden administration has suggested a sectoral approach to AI. So AI regulation might be different for the health sector, the labor sector and education.

David: I do think a sectoral approach makes more sense. National AI regulation is going to be broad in scope. When it comes to actually applying the regulations it’s going to have to get sectoral anyway. Are we really going to get that worried about the application of artificial intelligence to make farm tractors more efficient? No. I do have deep concerns about the use of AI for [medical] diagnoses, for reviewing judicial judgments in the legal space, for hiring practices and, of course, for what it can teach people in education [institutions].

Howard: One suggestion is that at least individuals should be told when they’re interacting with an AI system, either online through text or voice. As one U.S. Senate witness said, no one should be tricked into dealing with an AI system.

David: I 110 per cent agree with this. When Google demoed its AI assistant concept a few years back, one that could call people on your behalf to book things like hair appointments, it had natural-sounding language. It could do “ums” and “ohs” and pauses. It had a great command of conversation. It creeped the hell out of me, because someone could be interacting with AI on someone’s behalf and not realize it. People absolutely need to be told upfront by an AI when they’re engaged with it. I want to refer back to people not knowing they’re engaging with a bot. The Ashley Madison breach in 2015 revealed that many of the would-be cheaters [on their partners] were actually engaging with chatbots [in text conversations] designed to sucker them into buying more credits for conversations with the people they were trying to have affairs with, who turned out to be bots. Companies should face big consequences if they deceive people into thinking they’re interacting with a real human being when in fact they’re communicating with an AI.

Howard: There was a suggestion from one of the witnesses who testified this week that there be a cabinet-level department of AI, with a Secretary of AI.

David: It’s an interesting concept. If that role was a co-ordinating one to help the whole of government understand when and where to regulate and look for problem areas with AI it might make a lot of sense. In the same way that in Canada we have a cabinet position for Finance that helps set the direction of the budget, and then each individual department goes off and does their thing. I also say in Canada we should have a cabinet-level position for cybersecurity that performs a co-ordinating function. But the challenge with some of these big wicked problems in government is what we saw with the White House and the loss of Chris Inglis when there was infighting about who should be responsible for what. [Inglis was the first U.S. National Cyber Director. He served from July 2021 until resigning in February.] So unless it’s a co-ordinating role you’re going to end up with good old human politics.

Howard: To close this topic I note that the chairman of the Senate committee this week also said the AI industry doesn’t have to wait for the U.S. Congress to be proactive. And by that I think he meant companies can be responsible without regulation.

David: Absolutely not. The short-term pressures of a modern capitalist economy will force people into building things because they can, because they’re afraid someone else is going to build it first and they’re going to miss that economic opportunity. And the consequences of this for society can impact individuals in deep, meaningful ways. AI might restructure jobs and sectors in ways that we don’t fully understand. I don’t think there’s anybody today who could say with absolute confidence that, when the internet rolled out with the fanfare it did in the mid-1990s, they saw Amazon becoming the global economic powerhouse it is now. The way that the web has changed your life with social media, I don’t think people saw that in 1994. I don’t think we fully see all the consequences of AI. We leave industry to make its own decisions at our societal peril.
