Findings from a new survey on online hate are bolstering the case for the federal government’s promised regulatory changes for social media companies.
The poll, conducted by Abacus Data for the Canadian Race Relations Foundation, documents widespread concern among Canadians about the rise and impact of online hate. A majority of Canadians support more government action to combat hate and racism online.
Released on Monday, the survey shows that one in five Canadians has experienced one or more types of online hate. Racialized individuals are around three times more likely to have experienced hate speech than their non-racialized counterparts.
Notably, 42 per cent have experienced or seen content inciting violence, while 32 per cent have experienced or seen physical threats.
This is in part due to the staggering volume of online hate on platforms such as Facebook, Instagram, YouTube and Twitter. Ryan Chan, a project lead on social media and online hate at the Chinese Canadian National Council for Social Justice, says a tool being developed in part by his group to track anti-Chinese racism related to COVID-19 has already sifted through hundreds of millions of hateful tweets. Twitter sees an average of 500 million tweets posted each day.
And as shown by recent events, online hate can have real-life consequences. For instance, former president Donald Trump’s supporters, many of whom belong to far-right militias and extremist groups, organized and incited violence online in advance of the Jan. 6 insurrection at the U.S. Capitol.
“It is also evidence that Canada is far from immune to online expressions of hate and racism,” said CRRF executive director Mohammed Hashim in a press release.
Since the start of the COVID-19 pandemic, Canada has seen a spike in anti-Asian physical assaults and harassment coinciding with the spread of anti-Asian hate online. Anti-Indigenous, anti-Black and Islamophobic online content also remain rampant.
The survey, conducted from Jan. 15 to 18 with 2,000 Canadian residents, found that 60 per cent believe the federal government should do more to tackle online hate. A large majority also said social media platforms should step up, with 80 per cent supporting a requirement that social media companies remove hateful content within 24 hours of it being flagged.
“The fact that most Canadians see this as a problem is all the more reason why our government needs to make online hate speech regulation a policy priority,” Hashim said.
Fulfilling a pledge
In 2019, the federal Liberals pledged to regulate social media platforms to stop the spread of violent extremism online, as well as other harms like child exploitation. Now, Heritage Minister Steven Guilbeault and Justice Minister David Lametti are reportedly close to finalizing a series of measures they will then present to cabinet. If approved, the measures could be unveiled in Parliament in the next few months.
While the specifics are still largely unknown, the measures would likely include large fines for companies failing to remove flagged hateful content and the creation of a new regulator.
Canada may also follow the lead of other countries with new laws already on the books. In Germany, social media companies with over two million users must remove illegal content within 24 hours of it being flagged, or face large fines, among other measures.
But the German model, when first passed in 2017, faced criticism.
“Requiring companies to make those decisions quickly or face serious penalties, I think we run a risk of overreach,” said Cara Zwibel, a lawyer and director of the fundamental freedoms program at the Canadian Civil Liberties Association. “A lot more is going to be taken down with them erring on the side of caution than realistically should be.”
Expressing skepticism about both industry self-regulation and government rules, Zwibel noted that this could instead be an opportunity to re-examine existing legal restrictions around expression and adequately enforce them to fill the gaps. She also suggested requiring social media companies to be transparent about their policies and algorithms.
But rather than opposing social media regulation outright, Zwibel said the CCLA’s response would depend on the details of the upcoming legislation, and she urged the government to include safeguards.
“There need to be due process mechanisms, ways for people to appeal decisions, ways to make sure that the decisions are made by truly independent and impartial individuals or bodies,” she said.
‘A good springboard’
These suggestions are also reflected in a Jan. 13 open letter on how to hold social media platforms accountable.
Among a number of measures, it recommends creating “an independent oversight body with a mandate that includes enforcing and auditing online hate regulations, updating definitions of prohibited content, and hearing significant appeals.”
Chan says his organization, one of over 30 that endorsed the open letter, drew on lessons learned from the German model in calling for an oversight and appeal body.
He also hopes that community groups like his will have seats on the oversight body. He believes this inclusion — rather than leaving the entire decision-making power to the government — would increase the process’s legitimacy, safeguard dissent and protect against systemic biases.
“There will still be difficult cases, there will still be difficult questions, but that's why there's this appeal body,” he said, reiterating the need to tackle online hate as soon as possible rather than waiting years for a better solution.
“I'm not saying it would be perfect right off the bat. But over time it will be clearer, and I just hope that the public would be willing to give this process a chance because we can see the devastating effects that unregulated social media can [have].”
Chan pointed out that social media regulation is ultimately just a start, rather than a silver bullet for fighting hate.
“Social media, the internet and hate speech provide a good springboard because it's flashy — but the problem goes much deeper,” he said.