
Online content moderation

Can AI help clean up social media?

Thomson Reuters Foundation
26 Dec 2021 00:00:00 | Update: 26 Dec 2021 01:01:50

Two days after it was sued by Rohingya refugees from Myanmar over allegations that it did not take action against hate speech, social media company Meta, formerly known as Facebook, announced a new artificial intelligence system to tackle harmful content.

Machine learning tools have increasingly become the go-to solution for tech firms to police their platforms, but questions have been raised about their accuracy and their potential threat to freedom of speech.

Here is all you need to know about AI and content moderation:

Why are social media firms under fire over content moderation?

The $150 billion Rohingya class-action lawsuit filed this month came at the end of a tumultuous period for social media giants, which have been criticised for failing to effectively tackle hate speech online and increasing polarization.

The complaint argues that calls for violence shared on Facebook contributed to real-world violence against the Rohingya community, which suffered a military crackdown in 2017 that refugees said included mass killings and rape.

The lawsuit followed a series of incidents that have put social media giants under intense scrutiny over their practices, including the killing of 51 people at two mosques in Christchurch, New Zealand in 2019, which was live-streamed by the attacker on Facebook.

In the wake of the deadly Jan. 6 assault on the U.S. Capitol, Meta's CEO Mark Zuckerberg and his counterparts at Google and Twitter appeared before the U.S. Congress in March to answer questions about extremism and misinformation on their services.

Why are companies turning to AI?

Social media companies have long relied on human moderators and user reports to police their platforms. Meta, for example, has said it has 15,000 content moderators reviewing material from its global users in more than 70 languages.

But the mammoth size of the task and regulatory pressure to remove harmful content quickly have pushed firms to automate the process, said Eliska Pirkova, freedom of expression lead at digital rights group Access Now.

There are "good reasons" to use AI for content moderation, said Mitchell Gordon, a computer science PhD at Stanford University. 

"Platforms rarely have enough human moderators to review all, or even most, content. And when it comes to problematic content, it's often better for everyone's well-being if no human ever has to look at it," Gordon said in emailed comments.

How does AI moderation work?

Like other machine learning tools, AI moderation systems learn to recognise different types of content after being trained on large datasets that have been previously categorised by humans.
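The idea of learning from human-labelled examples can be illustrated with a deliberately tiny sketch. The dataset, labels and word-counting "model" below are invented for illustration; real moderation systems are trained on millions of examples with far more sophisticated models:

```python
from collections import Counter

# Hypothetical human-labelled training data (invented for this sketch).
labelled = [
    ("you are wonderful", "ok"),
    ("have a great day", "ok"),
    ("i hate you", "harmful"),
    ("you people are vermin", "harmful"),
]

# "Training": count how often each word appears under each label.
word_counts = {"ok": Counter(), "harmful": Counter()}
for text, label in labelled:
    word_counts[label].update(text.split())

def classify(text):
    # Score each label by how often its training words occur in the text,
    # and pick the higher-scoring label.
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("i hate you people"))   # harmful
print(classify("have a wonderful day"))  # ok
```

Even this toy version shows the core mechanism: the system only "knows" whatever patterns appeared in the human-categorised examples it was trained on.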

Researchers collecting these datasets typically ask several people to look at each piece of content, said Gordon. 

"What they tend to do is take a majority vote and say, 'Well, if most people say this is toxic, we're gonna view it as toxic'," he said.

From Twitter to YouTube to TikTok, AI content moderation has become pervasive in the industry in recent years.

In March, Zuckerberg told Congress AI was responsible for taking down more than 90% of content deemed to be against Facebook guidelines.

What are the pitfalls?

Tech experts say one problem with these tools is that algorithms struggle to understand context and subtleties that allow them to discern, for example, satire from hate speech.

"Computers, no matter how sophisticated the algorithm they use, are always essentially stupid," said David Berry, a professor of digital humanities at the University of Sussex in Britain.

"(An algorithm) can only really process what it's been taught and it does so in a very simplistic fashion. So the nuances of human communication ... (are) very rarely captured."

This can result in harmless content being censored and harmful posts remaining online, which has deep ramifications for freedom of expression, said Pirkova of Access Now.

Earlier this year, Instagram and Twitter faced backlash for deleting posts mentioning the possible eviction of Palestinians from East Jerusalem, something the companies blamed on technical errors by their automated moderation systems.
