
Social media’s coronavirus challenge

Mark Scott
24 Feb 2022 00:00:00 | Update: 24 Feb 2022 01:08:06

The Covid-19 pandemic changed everything — even for social media giants like Facebook and Twitter. Over the last 18 months, as the global death toll surged to more than 4 million people, these tech companies, which once considered themselves neutral platforms for free speech, took an increasingly hands-on role in policing what users said about public health. They removed millions of posts that spread online falsehoods. They censored global leaders who championed Covid-19 misinformation. They promoted official health advice about vaccines to billions worldwide.

In short, pressed by the global public health emergency, social media platforms became arbiters of information. Now, as the world stumbles toward a new post-Covid reality, they’re quickly realizing they’ve bitten off far more than they can chew.

“Dealing with Covid really showed the world that they can act decisively when they need to,” said Philip Howard, director of Oxford University’s program on democracy and technology.

Having witnessed how tech companies have the expertise, and willingness, to monitor, track and remove potentially harmful content, policymakers in Brussels, Washington and elsewhere are heaping pressure on platforms to do even more. That includes everything from removing reams of Covid-19 social media posts to opening up companies’ content algorithms to greater public scrutiny.

With public anger over social media content growing, governments are also demanding these firms apply similar restrictions to other hot-button topics where divisive and often false posts can also cause wide-ranging damage, such as elections, far-right extremism and climate change. “For me, the next big crisis question is over climate change where the scientific consensus is just as strong as the public health consensus around Covid,” Howard added.

This has put some of Silicon Valley’s biggest names in a bind — one, in many ways, of their own making — over an ever-higher bar for the type of dubious content they have to police across the web. By taking an aggressive, yet incomplete, stance on Covid-19 misinformation, social media giants are discovering they have opened a Pandora’s box when it comes to how posts are tracked online — one that will be almost impossible to close once the current pandemic eventually ebbs into memory.

As the European managing editor for NewsGuard, an analytics firm that tracks digital falsehoods, Labbe, a former reporter, and her team work with the World Health Organization (WHO) to track which coronavirus hoaxes and conspiracy theories are trending. A report last month found that typing “Covid” into the search bar of TikTok, the Chinese-owned video-sharing service, brought up autocomplete suggestions like “Covid vaccine side effects” and “Covid vaccine magnet.” Another showed that well-entrenched anti-vaccine influencers were broadcasting to tens of thousands of followers on Facebook and Instagram.

“I feel like I’m saying the same thing over and over again,” Labbe said. “Misinformation is still alive on a lot of these platforms.”

Labbe’s work shows how, a year and half into the global crisis, social media giants are still struggling to sort through what’s true and what’s false — especially when purveyors of junk science wrap their wares in technical-sounding jargon. The fact that leading scientists often disagree doesn’t make it easier.

The flood of falsehoods has piled pressure on platforms to act, as there’s growing, albeit fledgling, evidence linking viral misinformation to harmful health outcomes. A recent peer-reviewed paper, for example, linked false rumors that drinking concentrated alcohol could kill the coronavirus to around 800 deaths from alcohol poisoning.

“The spread of health misinformation matters because it can be dangerous for people’s health, as we’ve seen in this pandemic,” said Aleksandra Kuzmanovic, a social media manager at the WHO. “The wrong advice on how to prevent or treat the virus infection can have harmful effects on people’s health and even cause death.”

Social media companies say they have deleted scores of Covid-19 misinformation posts; removed countless extremist accounts and movements like the QAnon conspiracy theory; and worked with both independent fact-checkers and public health authorities to pepper billions of people with up-to-date information about the global pandemic.

And yet, telling fact from harmful fiction is often easier said than done. Fast-evolving Covid-19 science, according to public health experts, makes labeling misleading coronavirus posts more complicated than filtering out terrorist content — even as global leaders like U.S. President Joe Biden call on tech giants to do more to combat the misinformation threat.

Last month, for instance, an analysis claiming that coronavirus vaccines caused two deaths for every three lives they saved was published in a legitimate, peer-reviewed journal. Days later, it was retracted amid an outcry, though not before it had been shared widely online.

Facebook also recently reversed its policy of barring posts alleging the pandemic was caused by a leak from a Chinese lab — an idea once dismissed as a conspiracy theory — after mainstream experts began to discuss that possibility.

When it comes to other types of dangerous speech, like removing extremist material online, the effort is still very much a work in progress.
