
Social media: A business of commodified hate

Sairas Rahman
29 Nov 2021 00:00:00 | Update: 29 Nov 2021 00:10:05

The Wall Street Journal, in a series of news reports titled “The Facebook Files: A Wall Street Journal Investigation,” shed light on a creeping suspicion that has been plaguing the minds of anyone even slightly familiar with how big tech companies operate.

Published throughout September this year, these articles were based on an analysis of internal Facebook documents such as research reports, online employee discussions and drafts of presentations to senior management.

Initially anonymous, data engineer and former Facebook product manager Frances Haugen later stepped forward as the whistleblower, releasing tens of thousands of internal documents to the Securities and Exchange Commission and The Wall Street Journal.

She explained in detail the “misinformation burden,” Instagram’s impact on teenage girls’ mental health, the desperate need for greater transparency, and how the algorithmic feeds drive hate and violence.

Despite the grave allegations and a firestorm of criticism, Facebook posted better-than-expected results for the third quarter, earning $9 billion in profit in the three months to September, up from $7.8 billion in the same period last year.

But how does Facebook – a social media giant that also owns WhatsApp and Instagram – commodify hate and violence to make billions in profit? To understand this, we have to take a deeper look into their free-to-use service model.

An algorithm driven by engagement

Though the underlying systems are complicated, the basics of how a social media platform utilises engagement and algorithms are fairly easy to understand.

In layman's terms, "engagement" is any way a user interacts with social media content – posting a status update, photo or video, leaving a comment, reacting with an emoji, or even just clicking on a post.

Everything a user does on social media is logged by the platform as an engagement. Using this data, a platform builds a comprehensive profile of every man, woman and child registered to its services for one singular purpose: selling advertisers access to that data for billions in profit.

The more a user engages with social media content, the more time they are likely to spend on that particular platform. This means more revenue for the platform, as these companies show highly personalised advertisements tailored to individual users.
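As a purely illustrative sketch – not Facebook's actual system – the bookkeeping behind this model might look something like the toy Python below, in which every event type, topic and weight is an invented assumption:

```python
from collections import Counter, defaultdict

# Hypothetical sketch of how a platform might log engagement events and fold
# them into per-user interest profiles for ad targeting. The action weights
# and topics are invented for illustration; this is not Facebook's code.
ENGAGEMENT_WEIGHTS = {"click": 1, "like": 2, "comment": 3, "share": 4}

# user_id -> topic -> weighted engagement score
profiles = defaultdict(Counter)

def log_engagement(user_id, topic, action):
    """Record one engagement event against the user's profile."""
    profiles[user_id][topic] += ENGAGEMENT_WEIGHTS.get(action, 1)

def top_interests(user_id, n=3):
    """Return the topics advertisers would most want to target this user on."""
    return [topic for topic, _ in profiles[user_id].most_common(n)]

# A handful of logged actions for one fictional user
log_engagement("user_42", "sports", "like")
log_engagement("user_42", "politics", "share")
log_engagement("user_42", "politics", "comment")

print(top_interests("user_42"))  # ['politics', 'sports']
```

Scaled to billions of accounts, profiles like this are what make the advertising so precisely targeted, and so lucrative.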

In today's world, user data is a highly sought-after commodity, and social media platforms are the most effective way to harvest and monetise this data.

Facebook currently has nearly 3 billion users worldwide, and it is not humanly possible to collect, categorise and monetise such a vast amount of data by hand to fuel the company's revenue and growth. This is where the social media algorithm comes into the equation.

Algorithms are used extensively by all tech giants, including Facebook, Amazon and YouTube.

The leaked documents showed that the algorithms reward engagement. So, as soon as a post receives comments, likes or any other interactions, Facebook spreads it more widely and features it more prominently in feeds, instead of simply showing posts in chronological order.

The engagement-based formula helps sensational content, such as posts that feature rage, hate or misinformation, travel quickly, far and wide.
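A simplified, assumption-laden sketch shows why. The toy ranking below compares a plain chronological feed with an invented engagement score that rewards every interaction, positive or negative; the posts, weights and time decay are made up for the example and are not Facebook's formula:

```python
# Illustrative comparison of a chronological feed versus engagement-based
# ranking. The posts, weights and decay are invented for this example.
posts = [
    {"id": "calm_update",  "age_hours": 1,  "likes": 3,   "comments": 1,  "shares": 0},
    {"id": "family_photo", "age_hours": 3,  "likes": 20,  "comments": 5,  "shares": 1},
    {"id": "outrage_post", "age_hours": 12, "likes": 250, "comments": 90, "shares": 60},
]

def engagement_score(post):
    """Reward every interaction, whatever its sentiment, and decay with age."""
    interactions = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return interactions / (1 + post["age_hours"])

chronological = sorted(posts, key=lambda p: p["age_hours"])
engagement_ranked = sorted(posts, key=engagement_score, reverse=True)

print([p["id"] for p in chronological])
# ['calm_update', 'family_photo', 'outrage_post']  (newest first)

print([p["id"] for p in engagement_ranked])
# ['outrage_post', 'family_photo', 'calm_update']  (the angriest post wins)
```

Even in this crude version, the post that provokes the most reactions leapfrogs newer, quieter ones – the same dynamic the leaked documents describe at Facebook's scale.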

Hatred, commodified for business

It is in human nature to engage with content that is offensive, hateful, controversial and polarising, which in turn deepens divides among people and sows discontent. Tech giants such as Facebook utilise these negative reactions to keep users glued to their platforms.

Facebook does not care whether a user likes or hates a photo, video or status update. If they comment on or react to a piece of content, agreeing and disagreeing count the same, and the platform begins to spread that content more widely to boost engagement further.

Polarising content, such as posts promoting communal hatred and violence, has a strong potential to go viral, and history has shown that it usually does. This brings more engagement, and more advertising revenue, to Facebook.

The leaked documents indicate that, to keep profits high and the money flowing, the algorithm boosts such content with little oversight from the company, and they suggest it is in Facebook's best interest to let the hatred flow unrestricted.

Facebook has failed to show any viable countermeasure for this loophole, which is being exploited by criminals, people with ulterior motives, and certain quarters who seek to destabilise a region to serve their own agenda.

Can we do anything at all?

Facebook has proved time and time again that it cares little about its users' privacy and security. In April this year, the personal data of over 500 million Facebook users was posted on a low-level hacking forum.

The leak covered over 533 million Facebook users from 106 countries, including more than 32 million records on users in the US, 11 million on users in the UK and 6 million on users in India.

The data included phone numbers, Facebook IDs, full names, locations, birthdates, bios and, in some cases, email addresses. Security researchers across the globe warned that hackers could use it to impersonate people and commit fraud.

The deep-rooted problems brought on by social media platforms impact our lives, livelihoods and the world. But unfortunately, there is no straightforward way to tackle the almighty engagement algorithm.

However, any user has the power to fight the algorithm simply by not engaging with polarising or controversial content. Low engagement leads Facebook to treat a piece of content as unpopular, because it is less likely to bring in advertising revenue.

Users have the power to keep hateful content on social media from going viral simply by ignoring it. Without likes, angry reactions, shares, comments or any other engagement, a post will fade into oblivion before it can do any real harm.

Until strong policies are in place at the state and global levels to hold social media platforms accountable for the often-disastrous way they make money, users themselves will have to take up the mantle of social media moderators.

Remember, ignoring and blocking hateful posts are the fastest ways to neutralise the messages they carry.

 

The writer is a journalist working at The Business Post
