Facebook bans Canadian groups for violating hate policy

Facebook has banned a number of people and organizations from its sites for their extreme statements.

“Individuals and organizations who spread hate, attack, or call for the exclusion of others on the basis of who they are have no place on our services. That’s why we have a policy on dangerous individuals and organizations, which states that we do not allow those who are engaged in offline ‘organized hate’ to have a presence on our services,” a Facebook spokesperson told Daily Hive.

“The individuals and organizations we have banned today violate this policy, and they will no longer be allowed a presence on our services. Our work against organized hate is on-going and we will continue to review individuals, Pages, groups and content against our Community Standards.”

Those banned and their pages and linked groups include commentator and former Toronto mayoralty candidate Faith Goldy, Kevin Goudreau, Canadian Nationalist Front, Aryan Strikeforce, Wolves of Odin, Soldiers of Odin and Canadian Infidels.

The move comes after Facebook said on March 27th that it will start cracking down on “praise, support and representation of white nationalism and white separatism on Facebook and Instagram.”

Coincidentally it also came as the U.K. government released a proposal to regulate online content providers (see below).

Embarrassed by what some people have been publishing on their platforms, social media companies are increasingly scrutinizing content that goes up on their sites. At the same time Western governments are increasingly considering regulating sites to at least make it mandatory they take down offensive content fast.

One problem is defining offensive content without violating freedom of speech laws.

In a March 30 op-ed article Facebook CEO Mark Zuckerberg urged governments to at the very least come together on a “more standardized approach” to guidance for social media. “One idea is for third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards,” he said. “Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.”

By coincidence, as Facebook was announcing its bans the U.K. began a public consultation on creating an independent regulator for online safety, which will set obligations on Web companies to counter illegal content and activity, including hate. The consultation will be around a white paper on online harms with a number of proposals.

The aim is to create a regulatory regime that covers everything from cyberbullying of children, child pornography, intimidation, disinformation and the sale of illegal goods to terrorist activity and hate speech — while not interfering with free speech.

Not included are harms to organizations (such as theft or fraud), which are subject to another initiative, or harms suffered by individuals that result directly from a data breach, which are covered by other law.

Proposal

Briefly, the government proposes creating a new statutory duty of care to make companies take more responsibility for the safety of their users and tackle harm caused by content or activity on their services. Compliance will be covered by a code of practice. Companies will have to file annual transparency reports outlining the prevalence of harmful content on their platforms and what countermeasures are being taken to address it. The regulator will also have powers to require additional information, including on the impact of algorithms used to select content shown to users.

Companies will need to respond to users’ complaints within an appropriate timeframe and to take action consistent with the regulatory framework.

Online companies covered would include those allowing users to share or discover user-generated content or interact with each other online.

To meet possible threats to both national security and the physical safety of children, the government will have the power to direct the regulator in relation to codes of practice on terrorist activity or child sexual exploitation and abuse online. Those codes will have to be signed off by the U.K. Home Secretary.

The goal of the proposed regulatory framework, the white paper says, is to set clear standards to help companies ensure the safety of users while protecting freedom of expression, especially in areas of harmful content or activity that may not meet a criminal threshold but can still be damaging. The government also hopes it will promote a culture of continuous improvement among online companies, and encourage them to develop and share new technological solutions rather than merely complying with minimum requirements.