Social media platforms have a problem out of Africa

Many people are watching war scenes in Ukraine on their phones. Social media is full of photos, video clips and satellite images. Satellites have taken pictures of Russian positions and shown the destruction caused by Russian attacks on Ukrainian cities. However, a satellite can take a picture of a particular location only once a day, assuming there is good weather and no cloud cover. After that, the satellite orbits away from the spot.

As we know, the picture painted by online posts, whether of war or anything else, is not always accurate. Social media is no different in times of war than it is in times of peace. Thousands of videos and photos are posted from Ukraine daily, but most people see only the handful that get the most ‘likes’ and ‘shares’. In some cases, such as TikTok’s, this is because the platform’s algorithms are exceptionally well crafted to present what its users want to see.

Interestingly, it appears that TikTok may be the social media platform best designed for war because it is a video platform that is also instant. YouTube requires an elaborate video set-up before a clip can be posted, and a whole host of other sites, such as Twitter and Meta’s Facebook and Instagram, also carry non-video content. To post a video grab on TikTok, one needs only a camera, which every smartphone today comes equipped with. The platform is easy to use. Short clips can be posted instantly, allowing anyone on the ground with a smartphone to ‘crowdsource’ a nearly endless stream of battle footage.

On the other hand, the social media platforms are doing little to control the violent content being spread. They claim that the artificial intelligence (AI) they use filters out the hateful content that some people choose to post online. Despite these claims, sweatshops for this outsourced work existed in the US, with groups of low-wage workers watching horrific videos posted online by the darkest of humanity. I have written before in this space that these sweatshops have been documented to drive their workers to the point of psychological breakdown while performing their excruciating work. The very existence of such sweatshops is testament that the ‘AI can do everything’ approach doesn’t work.

And therein lies the irony behind Big Tech’s Janus-like approach. In 2019, a disturbing report by The Verge (bit.ly/472zNaL) pointed to the dark and nefarious practice of content moderation in the world of online social networks. Facebook (now Meta) and other social network companies reportedly pay little attention to the employees or contract firms that monitor their massive sites for objectionable content.

That report presented a sinister and disturbing view of the operations of an information technology service provider hired by Meta to monitor content on its platform. Apparently, at least one employee died of a heart attack while on the job. These guardians of our mental health are paid very little and often have to endure audio-visual material that underlines the inhumanity of people towards fellow humans and animals. The burden on these censors is incredible, as it is they who must sift through and weed out the horrors that perverted humans have posted online. According to The Verge, this work was outsourced to IT firm Cognizant and carried out at centers in Tampa, Florida, in the US.

I am told that these centers in the US have since been closed, but it appears that Big Tech and its outsourcing providers have now moved this trauma-laden work to a different part of the world. A report in Wired magazine (bit.ly/3pSKwUJ) takes note of this, citing a recent Kenyan court judgement. It said: “A court in Kenya has handed down a landmark ruling against Meta, the owner of Facebook and Instagram. The court ruled that the US tech giant was the ‘true employer’ of hundreds of people employed as moderators on its platforms in Nairobi, sifting through posts and images to filter out violence, hate speech and other shocking content. This means Meta could be sued for labor rights violations in Kenya, even though moderators are technically employed by a third-party contractor.” The report also said that the magazine had access to internal TikTok documents leaked to an NGO called Foxglove Legal.

According to these documents, TikTok is tracking the court proceedings in Kenya because it too has outsourced content moderation, through Luxembourg-based Majorel, to Kenya and other developing countries in Africa such as Morocco. These engagements have human moderators watch and reject objectionable videos posted online.

It is true that some such videos are valuable, especially for law enforcement agencies, as we have seen recently in India, where the perpetrators of the horrific mob violence in Manipur can now be punished. They can also serve as evidence against the excessive use of force in policing, as we have seen recently in France. Yet scouring, day after day, through videos that depict mankind’s inhumanity towards fellow humans is clearly horrendous for one’s mental health.

War is disgusting, be it in Ukraine or anywhere else. And a human’s mental health is equally fragile, whether the inhumane videos are viewed in America or Africa.

Siddharth Pai is the Co-Founder of Sienna Capital, a Venture Fund Manager.