Threat actors will take advantage of ChatGPT, says expert

Microsoft, software developers, law enforcement agencies, banks, students writing essays and almost everyone in between think they can take advantage of ChatGPT.

So do threat actors.

The artificial-intelligence-driven chatbot is touted as the search engine that will dethrone Google, help developers generate flawless code, write the next great rock hit … heck, it’s so new people can’t imagine what it can do.

But history shows crooks and nation-states will try to leverage any new technology to their advantage, and no infosec professional should expect any different.

So, says a threat researcher at Israel-based Cyberint, they’d better be prepared.

If ChatGPT will help software companies write better code, said Shmuel Gihon, it will do the same for malware creators.

Not only that, he added, it could help them reverse-engineer security applications.

“As a threat actor, if I can improve my hacking tools, my ransomware, my malware every three to four months, my developing time might be cut by half or more. So the cat-and-mouse game that defence vendors play with threat actors could become way harder for them.”

The “if” in that sentence is not because of the capability of the tool, he added, but the capabilities of the threat actor using it. “AI in the right hands might be a very strong tool. Professional threat actors, ransomware groups and espionage groups will probably make better use of this tool than amateur actors.

“I’m pretty sure they will find great uses for this technology. It will probably help them reverse engineer software they are attacking … help them find new vulnerabilities, and bugs in their own code, in shorter periods of time.”

And infosec pros shouldn’t just worry about ChatGPT, he added, but any tool driven by artificial intelligence. “Tomorrow another AI engine will be released,” he noted.

“I’m not sure security vendors are prepared for this rate of innovation from the threat actors’ side,” he added. “This is something we should prepare ourselves for. I know AI is already embedded in security tech, but I’m not sure if it’s at this level.”

Security vendors should think about how threat actors could use ChatGPT against their applications, he advised. “If some of my products are open source or my front-facing infrastructure is built on engine X, I should know what ChatGPT says about my technology. I should understand how ChatGPT’s capabilities look through a threat actor’s eyes.”

At the same time, CISOs should see if the tool can be leveraged to help protect their environments. One possibility: Software quality assurance.
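As a hypothetical illustration of that QA idea (the function name, prompt wording and sample diff below are invented for this sketch, not drawn from the article or any vendor's tooling), a review pipeline might wrap each code change in a prompt asking an LLM to flag security and quality problems before a human reviewer sees it:

```python
# Minimal sketch of an LLM-assisted code-review step.
# The prompt text and names here are illustrative assumptions;
# the actual call to a model API is deliberately left out.

def build_review_prompt(diff: str) -> str:
    """Wrap a code diff in instructions asking an LLM to flag
    security vulnerabilities, logic bugs and style issues."""
    return (
        "You are a code reviewer. Examine the following diff for "
        "security vulnerabilities, logic bugs, and style issues. "
        "List each finding with its line and a suggested fix.\n\n"
        "```diff\n" + diff + "\n```"
    )

# A deliberately unsafe change (string-built SQL) for the model to catch.
sample_diff = (
    "+ query = \"SELECT * FROM users WHERE name = '\" + name + \"'\"\n"
)

prompt = build_review_prompt(sample_diff)
```

The resulting `prompt` string would then be sent to whatever model the organization has approved, with the response attached to the pull request for a human to verify.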


Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
