Zuckerberg calls for guidance from governments on defining what social media should ban

When the Internet was created, freedom-of-speech advocates hailed it as a network that would liberate the exchange of communications and ideas between people around the world.

That was then. This is now.

On the weekend, no less a figure than Facebook CEO Mark Zuckerberg called on governments to help social media companies smother the increasing use of their platforms for disinformation, misinformation, hate and cybercrime, while balancing freedom of expression.

There needs to be a “more standardized approach” for helping private companies censor content, he wrote. “One idea is for third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards. Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.”

He also said countries should adopt privacy regulations similar to the European Union’s General Data Protection Regulation (GDPR) “as a common framework.”

“I also believe a common global framework – rather than regulation that varies significantly by country and state – will ensure that the internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protections,” he wrote.

Australia may soon give Zuckerberg more than he wants. Two weeks ago, following a massacre of 50 people in nearby New Zealand that the shooter broadcast live on Facebook, Prime Minister Scott Morrison called for a crackdown.

“If they can write an algorithm to make sure that the ads they want you to see can appear on your mobile phone, then I’m quite confident they can write an algorithm to screen out hate content on social media platforms,” Morrison was quoted as saying.

For his part, Zuckerberg said Facebook will this week begin deleting “praise, support and representation of white nationalism and white separatism on Facebook and Instagram.”

These and other worries by democratic governments around the world about the malicious use of the Internet bring into sharper focus a proposed May 26 meeting in Ottawa between an international group of politicians and social media companies.

The invitation is from the House of Commons ethics and privacy committee and the so-called International Grand Committee on Disinformation and ‘Fake News,’ a group of elected politicians from nine countries (not including the U.S.). They have invited executives from Facebook, Google, Amazon, Twitter, WhatsApp and Snap for a public discussion on halting disinformation on their platforms.

It isn’t known yet how many have accepted the invitation.

In December the ethics committee published a report calling on the government to pass a law forcing social media companies to delete “manifestly illegal content in a timely fashion,” including hate speech, harassment and disinformation.

Zuckerberg’s weekend plea was published in the Wall Street Journal and Ireland’s Independent — where you need a subscription to read it — and summarized here by The Verge.

Zuckerberg was quoted as saying “Internet companies should be accountable for enforcing standards on harmful content,” but governments and regulators need to set standards for defining what constitutes harmful content, and guidelines for removing it from online platforms.

For example, Zuckerberg said, there are grey areas in some countries when it comes to determining what is a political ad.

As The Verge notes, Zuckerberg’s article is a change from his comments almost exactly a year ago, when he said he wasn’t “sure we shouldn’t be regulated” and felt that there was a role for regulators, provided it was the “right” regulation, but that “guidelines are much better than dictating specific processes.”

Social media companies are increasingly facing pressure from governments and the public about their unchecked content, pressure that only increased with allegations that Russia played a big role in spreading disinformation through social media during the 2016 U.S. presidential election.

The uproar after the New Zealand shooting video was copied around the world has only intensified the heat.

As an analysis in Sunday’s New York Times noted, people are wondering if Facebook, YouTube and Twitter should be treated like print publications and expected to vet every post, comment and image before it reaches the public.

Meanwhile, social media companies are uncomfortable hiring an increasing number of people to be censors – albeit after-the-fact censors. Google alone, the Times pointed out, has hired 10,000 reviewers to scan controversial content.

It can’t help but be noted that those employees don’t help increase revenue. That has led some to call for an end to the ad-supported revenue model social media favour, because it encourages content (blogs, tweets, photos, video), particularly content that stirs people up. The more emotional the content, the more it gets passed around.

German law

Last year, the Times noted, Germany passed a law making social media platforms liable for not deleting content that is “evidently illegal” in that country. Companies that systematically fail to remove illegal content within 24 hours face fines of the equivalent of US$56 million.

In its December, 2018 report, the Canadian House of Commons committee on ethics and privacy recommended Parliament pass laws forcing social media platforms

• to clearly label content produced automatically or algorithmically (for example, by bots);
• to identify and remove inauthentic and fraudulent accounts impersonating others for malicious reasons;
• to adhere to a code of practices that would forbid deceptive or unfair practices and require prompt responses to reports of harassment, threats and hate speech and require the removal of defamatory and fraudulent content.

It also called on the government to “enact legislation imposing a duty on social media platforms to remove manifestly illegal content in a timely fashion, including hate speech, harassment and disinformation, or risk monetary sanctions commensurate with the dominance and significance of the social platform, and allowing for judicial oversight of takedown decisions and a right of appeal.”

Howard Solomon
Currently a freelance writer, I'm the former editor of ITWorldCanada.com and Computing Canada. An IT journalist since 1997, I've written for several of ITWC's sister publications including ITBusiness.ca and Computer Dealer News. Before that I was a staff reporter at the Calgary Herald and the Brampton (Ont.) Daily Times. I can be reached at hsolomon [@] soloreporter.com
