Deepfakes fueling online anarchy: researchers weigh in on the complexities of regulation

From U.S. President Joe Biden spewing transphobic discourse, the Pentagon in flames, and Donald Trump resisting arrest to the Pope sporting a puffer jacket – we’ve seen it all, none of it real, and each image more ludicrous than the last.

With unprecedented advances in AI, deepfakes – the ability to alter existing content or generate entirely new content that is almost indistinguishable from “reality” – are becoming more and more convincing and sophisticated, while their uses are getting out of hand.

Last Monday, the CBC flagged a scam ad that surfaced on Meta’s Facebook and Instagram, featuring a deepfake of popular anchor Ian Hanomansing promising viewers it could “turn $28,000 a year to $28,000 a month” through some sort of investment scheme. One person told the CBC they fell for it, trusting Hanomansing’s image and credibility, and ended up losing money.

Cyber crooks are leaping at the opportunity to use the technology to, for instance, impersonate the children of elderly people and ask them to send money, or pretend to be someone’s boss demanding a transfer of funds.

The key problem is the democratization of AI models and tools, facilitated by cheap technology and significant advances in what they can produce, noted Steven Karan, head of insights at Capgemini Canada.

The underlying technology behind deepfakes, known as generative AI, improved fundamentally around 2018-2019, he explained. That leap followed Google’s 2017 introduction of the Transformer, which revolutionized the performance of large language models by providing a new way of encoding and decoding the information that flows through AI algorithms.
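For readers curious what that encode/decode structure looks like in practice, here is a toy sketch (not from the article) using PyTorch’s built-in Transformer module; the model size, sequence lengths, and random inputs are arbitrary and purely illustrative:

```python
import torch
import torch.nn as nn

# Toy illustration of the encoder/decoder structure Karan refers to:
# the encoder ingests a source sequence, and the decoder produces an
# output sequence conditioned on that encoding.
model = nn.Transformer(d_model=64, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2)

src = torch.rand(10, 1, 64)  # (source length, batch, embedding dim)
tgt = torch.rand(7, 1, 64)   # (target length, batch, embedding dim)

out = model(src, tgt)        # decoded representations
print(out.shape)             # torch.Size([7, 1, 64])
```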

At the same time, he added, the costs of compute and storage were plummeting, allowing hyperscalers to operate their platforms and technologies cost-effectively.

The technology has sneaked up on us, and the government is still well behind the curve in addressing the defects and dangers that came with it, Karan said. Even if the government does take action, he added, the technology is global; bad actors in rogue states remain unhindered by another country’s laws and regulations.

That leaves us with the platforms, which Bill Wong, AI and data analytics researcher at Info-Tech Research Group, says have been utterly ineffective because they fail to be proactive. Action, he added, is sometimes taken only when someone loses money or complains loudly enough.

Being proactive also means that the platforms should be able to attribute the source of generated content, Karan added. When someone posts content, the platform should run an attribution algorithm to determine where it came from and whether it comes from a trusted source.
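The article does not spell out how such an attribution algorithm would work, but one common building block is perceptual hashing: fingerprint verified media from known publishers, then compare uploads against that registry. Below is a minimal Python sketch of the idea; the registry, file names, and distance threshold are hypothetical, not anything Karan or the platforms described:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Compute a simple perceptual (average) hash of an image."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Hypothetical registry: hashes of verified imagery mapped to the
# publisher that registered them. The file name is a stand-in.
TRUSTED_HASHES = {average_hash("cbc_verified_frame.png"): "CBC"}

def attribute(path: str, max_distance: int = 5) -> str:
    """Check an upload against the registry of trusted sources."""
    h = average_hash(path)
    for known, source in TRUSTED_HASHES.items():
        if hamming(h, known) <= max_distance:
            return f"matches content registered by {source}"
    return "no trusted source found - flag for review"
```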

Technology, often AI-based, exists to screen for deepfakes and other such content, Wong said, yet the modus operandi of these platforms is to apologize and explain why it’s so hard to take such content down.
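In the simplest terms, proactive screening means scoring every upload before it is published rather than waiting for user reports. A rough sketch of such a gate follows; the detector itself and both thresholds are placeholders, since platforms have not disclosed how (or whether) they do this:

```python
REVIEW_THRESHOLD = 0.5  # queue for human review (illustrative value)
BLOCK_THRESHOLD = 0.9   # reject outright (illustrative value)

def deepfake_score(media: bytes) -> float:
    """Placeholder: a real detector would return P(synthetic)."""
    raise NotImplementedError("plug in an actual deepfake classifier")

def screen_upload(media: bytes) -> str:
    """Gate an upload on its deepfake score before publication."""
    score = deepfake_score(media)
    if score >= BLOCK_THRESHOLD:
        return "blocked"
    if score >= REVIEW_THRESHOLD:
        return "held for human review"
    return "published"
```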

The CBC, in fact, confirmed that the scam ad featuring the deepfake of Hanomansing was reported several times, but it kept reappearing.

Taking down content like this, Karan affirmed, is what’s been “troubling law enforcement agencies, for several decades now, since the advent of the internet.” It’s even more difficult when the host provider is outside the jurisdiction of a country or state, requiring inter-country efforts to take down content. 

But there is now, he stated, “a cloud oligopoly”, whereby the big providers are all based in North America, Europe, or Asia, creating a degree of concentration that makes accountability possible. Law enforcement agencies need to cooperate with these providers and leverage traceability tools to determine where else the content exists.

However, we, as consumers and sharers of online content, have a role to play too. As a society, we need to prioritize teaching critical thinking to users across varying levels and demographics, Karan noted, adding that he hopes it becomes a focus in schools so that the next generation is better equipped than we are today.

Wong concurred that the main line of defence remains education and training, notably on the prevalence of deepfakes as well as ways to detect them.

Being cyber security aware is also key, Wong stressed. “Everybody should be aware that you should do things like check the source. And if you receive a phone call from a person that sounds like your granddaughter asking for money, you should educate people that the common form of scams is always to introduce you to something familiar and always give you a situation that’s difficult. And then a response time that is unreasonable.”

People victimized by deepfakes unfortunately have a long fight ahead, especially when the damage is financial, reputational, or to their safety. Getting platforms to take down content is one thing, but the legal battle is another arduous and expensive struggle, which again points to the need for legislation that, Karan said, exists only in pockets today.

He added, “In Canada, we’re fundamentally behind where we need to be in terms of protecting citizens from cybercrime, from deepfake crimes, and harms caused by generative AI. We don’t have the regulations in place at a country level to protect our citizens adequately. And it is a fundamental gap. There’s just no way around it.”

Wong thinks the harm surrounding generative AI will get worse before it gets better. Human nature, he said, is to not take action until disaster hits. Meanwhile, deepfakes will plague election campaigns and spread fake narratives. The only upside, he added, is the hope that good actors will use the same technology to counter them.

Karan worries about “the literal terabytes” of deepfake content that can be put out so quickly that it overwhelms all the existing control measures that platforms have in place today. 

He speculates: “Could it get to a point where the volume of deepfakes is just so high that no source of information can really be trusted? What does that do at a society level? Does everyone go back to an analog world and go offline?”


