BEST OF THE WEB

How threat actors will abuse AI to defeat cyber security

“There are no silver bullets in cyber security” is old but accurate advice. So CISOs hoping that artificial intelligence/machine learning will do more than merely take the load off an already overworked security team are dreaming. AI won’t be the silver bullet that creates an impenetrable wall around the enterprise.

In fact, warns SecurityWeek columnist Kevin Townsend today, AI is just as likely to be used by malicious actors against an enterprise, and against the AI-based products it uses, as it is to prevent attacks.

His column is an expansion of, and commentary on, an academic paper published last month on the potential malicious misuses of artificial intelligence. Briefly, the paper states what every infosec practitioner should know: any tool will be turned against you. CISOs probably don’t think about it, but AI processes and algorithms will have vulnerabilities that can be exploited. And it’s not merely that threat actors can use AI/ML to automate processes, the report notes. The technology shows signs of being able to generate synthetic images, text, and audio that could be used to impersonate others online, or to sway public opinion by distributing AI-generated content through social media channels.

Think fake news is common now? Just wait.

“There is currently a great deal of interest among cyber-security researchers in understanding the security of ML systems,” Townsend quotes the paper as saying, “though at present there seem to be more questions than answers.”

Townsend quotes a security vendor who notes the report doesn’t mention one type of potential attack: using AI to undo the anonymization of data.

Still, the paper does discuss many other attacks likely to be seen soon if adequate defences aren’t created. It also warns that once governments realize the implications they will be tempted to step in. As a result, the authors urge policymakers to collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI. In addition, they say AI researchers and developers “should take the dual-use nature of their work seriously” and consider how their work could be abused.

The good news is that the cyber security industry is aware of the problem, says Townsend. For example, one vendor notes that AI can be leveraged to audit an environment’s configuration daily (or even hourly) for changes or compliance with security best practices.

In the meantime, what can a CISO do? Last year I cited a column by a security vendor who suggested questions infosec leaders should ask of providers whose solutions include artificial intelligence. That’s a good start. Another is reading the research paper itself.

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
