New tool protects open source AI from malware and code compromise

A new kind of Trojan horse has emerged: AI models laced with malicious code. The AI community got a jolt from Protect AI's revelation that a staggering 3,354 models on Hugging Face, a go-to repository for AI models, contained potential malware or compromised code.

Worse, Hugging Face's own security scans appeared to miss the threats in a third of these compromised models.

In response, Protect AI has developed a scanner tailored to detecting malware and compromised code in open source AI models.

Open source AI models are gaining popularity, given the cost of building and training a proprietary model.

This has made platforms like Hugging Face incredibly popular but, if Protect AI's numbers are correct, it has also made them a potential source of compromised AI code.

Protect AI's scanning software is one potential tool for detecting these issues and helping to verify the safety of open source AI models.
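To make the risk concrete: many open source models are shipped as Python pickle files (PyTorch's older .bin and .pt checkpoints, for example), and pickle can run arbitrary code the moment a file is loaded. The sketch below is purely illustrative and is not Protect AI's scanner; the MaliciousPayload class, the scan_pickle function, and the module blocklist are all hypothetical. It shows how a payload can hide in a pickled model, and how a scanner can flag dangerous imports by walking the pickle's opcodes without ever executing them.

```python
import pickle
import pickletools

# --- The threat: a pickled object can run arbitrary code when loaded. ---
class MaliciousPayload:
    def __reduce__(self):
        import os
        # On unpickling, pickle calls os.system(...) -- the attacker's code
        # runs at model load time, before any weights are even used.
        return (os.system, ("echo pwned",))

tainted_model = pickle.dumps(MaliciousPayload())

# --- A naive scan: walk the pickle's opcodes WITHOUT executing anything,
# and flag imports of modules that have no business inside model weights.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "sys", "builtins"}

def scan_pickle(data: bytes) -> list[str]:
    findings = []
    strings = []  # string constants seen so far; STACK_GLOBAL uses the last two
    for opcode, arg, pos in pickletools.genops(data):
        if isinstance(arg, str):
            strings.append(arg)
        if opcode.name == "GLOBAL":  # protocols 0/1: arg is "module name"
            module, _, name = arg.partition(" ")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:  # protocol 2+
            module, name = strings[-2], strings[-1]
        else:
            continue
        if module in SUSPICIOUS_MODULES:
            findings.append(f"suspicious import at byte {pos}: {module}.{name}")
    return findings

print(scan_pickle(tainted_model))
# Prints something like: ['suspicious import at byte 28: posix.system']
# ('posix' on Linux/macOS, 'nt' on Windows -- os.system pickles under its real module)
```

A production scanner has to cover many more formats and evasion tricks than this, which is one reason the ecosystem has also been shifting toward code-free weight formats such as safetensors.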

How will Protect AI keep up to date on threats? It has acquired Huntr, a bug bounty program aimed at AI models, which it hopes will provide continuing insight into new threats as they evolve.

Sources include: Axios

Jim Love
I've been in IT and business for over 30 years. I worked my way up, literally from the mail room, and I've done every job from mail clerk to CEO. Today I'm CIO and Chief Digital Officer of IT World Canada - Canada's leader in ICT publishing and digital marketing.
