Forgotten humans-in-the-loop. Labour-oriented critique of the current discussion of algorithmic content moderation

In the past decade, thriving online harassment and hate speech, armies of bots spreading disinformation as a means of electoral interference, far-right propaganda, and waves of obscurantism spreading through COVID-19-related fake news have repeatedly made online platforms’ content moderation the topic on everyone’s lips and newsfeeds. A recent round of discussion was triggered by the suspension of former US president Donald Trump’s Twitter account in January 2021, in the aftermath of the storming of the Capitol. Twitter’s controversial decision to limit interactions with, and later hide, Trump’s posts, followed by the ultimate suspension of his account, was taken at the level of the board of directors. However, not every user on the platform receives this much attention when it comes to content moderation. More often, given the sheer volume of content, machine learning systems are tasked with moderation. Human moderators are only responsible for reviewing machine decisions when users appeal them or when the machine learning algorithms flag a given case as contentious.

Concerned by the implications algorithmic moderation can have for freedom of expression, David Kaye, a former UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, called on online platforms to “[ensure] that any use of automation or artificial intelligence tools [in this particular case, for the enforcement of hate speech rules] involve human-in-the-loop”. Although the watchdog expressed valid concerns about the implications of algorithmic moderation for freedom of expression, there is a strong case to be made that humans have never been out of the content moderation loop.

Human-in-the-loop refers to the need for human interaction with machine learning systems in order to improve their performance. Indeed, algorithmic moderation systems cannot function without humans serving them: the machine has to be designed, built, maintained and constantly provided with new training data, all of which requires a complex human labour supply chain.

Given the increasing relevance of content moderation in public discourse, it is important to adopt a labour-oriented perspective in order to understand how algorithmic moderation functions. Contrary to the popular fallacy that opposes machine moderation to human moderation, current moderation is in fact a mixture of humans and machines. In other words, humans are very much “in the loop”.

The aggregated supply chain of all this human involvement is presented visually below:

Human labour supply chain involved in algorithmic content moderation

Platform boards


Algorithmic moderation starts with platform board members and hired consultants, who decide on the general rules for moderation. These rules are primarily a product of the need to manage external risks, among them pressure from civil society, regulators, and internet gatekeeping companies.

For instance, Facebook’s initial policy was to remove breastfeeding photos. The platform defended itself by referring to its general no-nudity policy, which fully exposed breasts violated. Only in 2014, after years of pressure from Free the Nipple activists, did Facebook allow pictures of women “actively engaged in breastfeeding”. Other achievements of civil society pressure include the tightening of most platforms’ policies on hate speech, including misogyny, racism, and explicit threats of rape and violence.

A number of more established civil society groups also propose moderation approaches grounded in their own ethical frameworks or derived from existing norms of national and international law. Intergovernmental organizations like the OSCE, the Council of Europe, and the United Nations run their own projects dedicated to ensuring freedom of expression while respecting the existing norms of international law.

Pressure from regulators has manifested itself most visibly in the adoption of rules and practices aimed at curtailing the spread of terrorist content and fake news. The latter problem, which became widely discussed after the alleged interference of the Russian government in the US elections, received additional attention due to the biopolitical concerns raised by the COVID-19 pandemic.

When it comes to pressure from gatekeeping companies, the example of the Parler social network is the most graphic. This social network ceased to exist after Amazon stopped providing the platform with cloud computing, citing insufficient moderation, and both major distribution platforms, the App Store and Google Play, suspended Parler’s apps. In a similar fashion, the nudity ban on Tumblr, which led to a mass exodus of users from that platform, came after Apple removed Tumblr’s app from the iOS App Store over reported child pornography. Likewise, Telegram’s CEO Pavel Durov reported that his decision to remove channels disclosing the personal data of law enforcement officers responsible for the brutal dispersal of rally participants in Russia was forced by a gatekeeping company: he claimed that Apple would not allow an update of the iOS app to be released until these channels were removed. During the 2021 Russian parliamentary elections, Apple and Google, squeezed in their turn by the Russian government, demanded that Telegram suspend a chatbot associated with the Smart Voting project run by allies of the jailed politician Alexey Navalny. The chatbot provided recommendations to Russian voters on which candidates to support in order to prevent representatives of the ruling party from winning mandates.

Not every platform CEO employs, or at least reports using, algorithmic moderation systems as a solution. Clubhouse’s moderation, for example, works without any machine-learning algorithm: conversations are recorded and stored by Agora, the Shanghai-based company providing the back-end for Clubhouse, and in the case of a complaint by users or a government, the platform can study the recording and pass a verdict.

The decision on whether to manage the aforementioned risks with the help of algorithmic moderation always lies with the board members of the platform. The boards create the rules, and the engineers find ways to enforce them – although the boundary is not well demarcated, given that CEOs often hold engineering degrees themselves and may also be directly involved in designing the systems’ architecture.

Engineers


It is engineers who decide which algorithm to introduce for content moderation, design and maintain that algorithm, and seek ways to modernize or replace it.

Engineers choose between two main categories of algorithms, both of which are commonly applied to content moderation.

One category of algorithms searches for partial similarity (perceptual hashing) between newly uploaded content and an existing database of inappropriate content. Perceptual hashing is effective, for example, in preventing the recirculation of known material such as viral videos of mass shootings, extremist texts or copyrighted films and songs. The best-known example of a perceptual hashing-based system is the Shared Industry Hash Database (SIHD), used by companies like Google, Facebook, Microsoft, LinkedIn, and Reddit. The database was created in 2017, contains terrorism-related content, and has been criticized for its lack of transparency.
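To make the mechanism concrete, the sketch below implements a simple average hash and compares uploads against a set of known hashes. The hash function, the placeholder hash values and the distance threshold are illustrative assumptions, not a description of how the SIHD or any production system actually works.

```python
# Minimal sketch of similarity matching via perceptual (average) hashing.
# Assumes Pillow is installed; hash values and threshold are placeholders.
from PIL import Image


def average_hash(path: str, hash_size: int = 8) -> int:
    """Reduce the image to a tiny grayscale grid and encode each pixel
    as 1 if it is brighter than the grid's mean brightness, else 0."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


# Hypothetical database of hashes of known inappropriate content.
KNOWN_HASHES = {0x9F3B6C2D81E4A750}  # placeholder value, not real data
THRESHOLD = 5  # maximum bit difference still treated as a match


def is_known_content(upload_path: str) -> bool:
    """Flag an upload if it is near-identical to any known hash."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, known) <= THRESHOLD for known in KNOWN_HASHES)
```

Because the comparison tolerates a few differing bits, re-encoded or slightly cropped copies of the same video frame or image can still be matched, which is the point of perceptual rather than exact hashing.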

The second category encompasses algorithms that predict (via machine learning) whether content is inappropriate. Machine learning technologies such as speech recognition and computer vision are effective in classifying user-generated content that infringes the platforms’ terms of service (ToS). This technology has, however, drawn criticism for discriminating against certain groups, as in the case of the overmoderation of tweets written in Black English. Such biases are not generated by the algorithm itself but stem from inappropriately compiled training datasets.
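As a rough illustration of this second category, the sketch below trains a tiny text classifier and routes low-confidence predictions to human review. The inline examples, the scikit-learn pipeline and the thresholds are illustrative assumptions rather than any platform’s real moderation model.

```python
# Minimal sketch of a predictive moderation classifier with an
# escalation path to human moderators for uncertain cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset; labels come from human annotators:
# 1 = violates the ToS, 0 = acceptable.
texts = [
    "I will hurt you if you post that again",
    "you people do not deserve to live here",
    "great photo, congratulations!",
    "see you at the concert tonight",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)


def moderate(text: str, remove_above: float = 0.9, escalate_above: float = 0.5) -> str:
    """Return an action based on the model's confidence; uncertain cases
    are escalated to human moderators rather than decided automatically."""
    p_violation = model.predict_proba([text])[0][1]
    if p_violation >= remove_above:
        return "remove"
    if p_violation >= escalate_above:
        return "send to human moderator"
    return "keep"
```

The sketch also anticipates a point made below: it is the low-confidence band, not the clear-cut cases, that ends up on the desks of human moderators.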

Driven by orders coming from the platforms’ boards, engineers constantly seek new datasets to improve their algorithms. These datasets are manually labelled by human moderators, crowdsourced data flaggers, and regular users.

Human moderators


The main role of human moderators is to review users’ appeals against particular machine-made decisions and to decide cases where the machine learning algorithm’s confidence is low. Moderators often work for outsourcing companies based in the Global South, and the conditions of their labour are a matter of concern for human rights activists. Besides usually being in economically precarious situations, moderators suffer enormous psychological pressure from dealing with very sensitive content, such as videos of live-streamed suicides, on a daily basis.

The moderators’ role is not limited to resolving disputes between user and platform. If human moderators confirm that uploaded content violates the platform’s ToS, this content, now verified by an expert, can further augment the dataset used for algorithm training.

The mainstream practice of human moderation presupposes the anonymity of the moderators. Facebook has taken a pioneering course away from this. In an attempt to meet the demand for increased transparency and improve the company’s legitimacy, Facebook’s board has introduced a system strongly reminiscent of the constitutional technology of separation of powers employed by nation states. Indeed, the idea of creating a quasi-judicial body within Facebook dedicated to content moderation matters came from Noah Feldman, a professor at Harvard Law School who took part in drafting the interim Iraqi constitution.

In 2020, the platform established the so-called Oversight Board (OB), referred to by commentators as Facebook’s Supreme Court. The OB comprises twenty members “paid six-figure salaries for putting in about fifteen hours a week”, among whom are acknowledged human rights activists, journalists, academics, lawyers, as well as former judges and politicians. By October 2021, the OB had adopted 18 decisions, some of which overturned the initial decision passed by anonymous human moderators or the board itself. Other decisions, the most significant of which concerned Trump’s account suspension, have been upheld by the OB. In passing its decisions, the OB refers both to the platform’s community guidelines and to international human rights standards, namely the provisions of the International Covenant on Civil and Political Rights. Clearly, twenty OB members are unable to review all the cases eligible for appeal, so their role is limited to reviewing the most representative ones, chosen by Facebook, producing advisory policy recommendations and, supposedly, creating precedents that human moderators can refer to in their practice. The company states that its “teams are also reviewing the board’s decisions to determine where else they should apply to content that’s identical or similar”.

Crowdsourced flaggers


Algorithm training sets are usually compiled by crowdsourced flaggers who classify content for a small financial reward, working through platforms such as Amazon’s Mechanical Turk or Yandex’s Toloka. On Yandex Toloka, for example, flaggers are tasked with classifying images into the following six categories: “pornography”, “violence”, “perversion”, “hinting”, “doesn’t contain porn”, “didn’t open”. In the task tutorial, one example image is classified as “doesn’t contain porn”, while two others are classified as “hinting”; the explanatory notes indicate that one of the latter displays “an obvious focus on the genital area” while the other shows an anatomical depiction of genitals. These classified datasets are most probably used by Yandex to moderate its own social media platforms, such as Messenger and Zen, the latter of which enjoys relative popularity in the Russian segment of the web. At the same time, the explicitly norm-prescribing manner in which these datasets are compiled illustrates the observation that “the training dataset is a cultural construct, not just a technical one”.
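As a rough sketch of how such crowdsourced judgments might be turned into training data, the snippet below aggregates the labels of several flaggers by majority vote. The agreement threshold and the aggregation rule are illustrative assumptions; only the category names follow the Toloka task described above.

```python
# Minimal sketch of aggregating crowdsourced labels into a single
# training label per item; thresholds are illustrative assumptions.
from collections import Counter

CATEGORIES = {"pornography", "violence", "perversion", "hinting",
              "doesn't contain porn", "didn't open"}


def aggregate_labels(labels: list[str], min_agreement: float = 0.6) -> str | None:
    """Return the majority label if enough workers agree, else None so the
    item can be re-labelled or discarded from the training set."""
    if not labels:
        return None
    label, count = Counter(labels).most_common(1)[0]
    if label in CATEGORIES and count / len(labels) >= min_agreement:
        return label
    return None


# Hypothetical item labelled by five workers:
print(aggregate_labels(["hinting", "hinting", "pornography", "hinting", "hinting"]))
# -> "hinting"
```

Whatever the exact aggregation rule, the categories themselves are decided in advance, which is precisely why the resulting dataset encodes a particular set of norms rather than a neutral description of the content.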

Users


Regular users of online platforms also contribute to training the algorithms, or to updating the databases used by similarity-searching algorithms, by reporting content they deem inappropriate. While for users themselves reporting is a way to make their voices heard by the platform, for the platform this feedback is valuable both in its own right and as unpaid labour that maps out the training datasets for predictive algorithms.

Once a sufficient number of users report that a piece of content does not meet the platform’s requirements, the content is sent to human moderators for further review. If the moderator confirms that the content violates the platform’s ToS, those users have demonstrably contributed to improving the algorithms.
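A minimal sketch of this reporting pipeline, assuming a simple report threshold and in-memory data structures (both illustrative, not drawn from any platform’s documentation), could look as follows:

```python
# Minimal sketch of the user-report pipeline: enough reports queue an item
# for human review, and confirmed violations become new training examples.
from collections import defaultdict

REPORT_THRESHOLD = 10          # assumed number of reports that triggers review
report_counts: dict[str, int] = defaultdict(int)
review_queue: list[str] = []   # content ids awaiting a human moderator
training_set: list[tuple[str, int]] = []  # (content_id, label) pairs


def report(content_id: str) -> None:
    """Register a user report and escalate once the threshold is reached."""
    report_counts[content_id] += 1
    if report_counts[content_id] == REPORT_THRESHOLD:
        review_queue.append(content_id)


def moderator_decision(content_id: str, violates_tos: bool) -> None:
    """Record the human verdict; a confirmed violation becomes a new
    labelled example that can later be used to retrain the classifier."""
    training_set.append((content_id, 1 if violates_tos else 0))
```

Seen this way, every report and every appeal feeds the same loop: unpaid user labour and paid moderator labour together produce the labelled data the “automated” system depends on.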

Conclusion


While current mainstream approaches to the analysis of automated moderation systems focus strictly on the technical details of how the algorithms work, the people involved too often go unseen. This paper pays tribute to the humans whose labour makes automated moderation possible yet who remain lost in the false human-machine dichotomy, when in fact the current practice of content moderation is an assemblage of intertwined humans and machines.



Ilya Lobanov is an independent researcher from Saint Petersburg, currently based in Vienna. His interests lie in the political economy of digital capitalism, urban politics, and the history of mind.

Read more:

Draft withdrawal a step towards moratorium on biometric surveillance

As an organisation dedicated to digital rights and freedoms, fighting against the use of mass biometric surveillance, we welcome the decision of the Serbian Minister of Interior to withdraw the controversial Draft Law on Internal Affairs.

We call on the authorities to take another step and impose a moratorium on the use of advanced technologies for biometric surveillance and mass processing of citizens’ biometric data. Such a move would be in line with the recommendations of the United Nations and the European Union, as well as of numerous organisations and experts around the world.

We also call on the Ministry and the Government of Serbia to secure a broad public debate in the future law-making process, especially when intending to regulate the use of advanced technologies in our society, so that we could jointly contribute to the quality of laws concerning all Serbian citizens.

SHARE Foundation’s comments on the Draft Law on Internal Affairs

SHARE Foundation’s press release



Read more:

Total surveillance law proposed in Serbia

The public debate on the Draft Law on Internal Affairs has officially introduced into the legislative procedure provisions for the use of mass biometric surveillance in public spaces in Serbia: advanced technologies equipped with facial recognition software that enable the capture and processing of large amounts of sensitive personal data in real time.

If Serbia adopts the provisions on mass biometric surveillance, it will become the first European country to conduct permanent indiscriminate surveillance of citizens in public spaces. The technologies that would thus be made available to the police are extremely intrusive to citizens’ privacy, with potentially drastic consequences for human rights and freedoms and a profound impact on democratic society. For that reason, the United Nations and the European Union have already taken a stand against the use of mass biometric surveillance by police and other state security services.

SHARE Foundation has used the opportunity of the public debate on the Draft Law to submit its legal comments on the provisions regulating mass biometric surveillance in public spaces, demanding that the authorities declare a moratorium on the use of such technologies and systems in Serbia without delay.

Although modestly publicized and only three weeks long, the public debate on the disputed Draft Law gathered national and international organizations in a common front against the harmful use of modern technologies. Among others, EDRi, the European network of NGOs, experts, advocates and academics advancing digital rights, reacted. Its official letter to the Serbian government and the Ministries of the Interior and Justice states that the provisions of the Draft Law allowing the capture, processing and automated analysis of people’s biometric and other sensitive data in public spaces are incompatible with the European Convention on Human Rights, which Serbia ratified in 2004.

“The Serbian government’s proposal for a new internal affairs law seeks to legalise biometric mass surveillance practices and thus enable intrusion into the private lives of Serbian citizens and residents on an unprecedented scale. Whilst human rights and data protection authorities across the EU and the world are calling to protect people from harmful uses of technology, Serbia is moving in a dangerously different direction”.

Diego Naranjo, EDRi

Gwendoline Delbos-Corfield, a French MEP from the Greens, has warned against the use of these intrusive technologies and the further restriction of the rights of those living in Serbia, emphasizing that such technologies magnify the discrimination that marginalised groups already face in their everyday lives. “We oppose this draft law that would allow law enforcement to use biometric mass surveillance in Serbia. It poses a huge threat to fundamental rights and the right to privacy”, said Delbos-Corfield.

“In Serbia, a country that Freedom House rated as only ‘partly free’, we suspect that the government has already begun the deployment of high-resolution Huawei cameras, equipped with facial recognition technology, in the city of Belgrade. If this draft law comes into effect, the government might have a legal basis for the use of biometric mass surveillance and the use of these cameras. Serbia now runs the risk of becoming the first European country to be covered by biometric mass surveillance. We call on the Serbian government to immediately withdraw the articles of this draft law that regulate biometric mass surveillance.”

Gwendoline Delbos-Corfield, MEP, Greens/EFA Group

The disputed provisions stipulate the installation of a system of mass biometric surveillance throughout Serbia without establishing the necessity of a measure under which all residents of Serbia would be constantly treated as potential criminals and their privacy disproportionately invaded. Of particular concern is the lack of a detailed assessment of the impact that total biometric surveillance can have on vulnerable social groups, as well as on journalists, civic activists and other actors in a democratic society.

SHARE Foundation comments on the Draft Law on Internal Affairs



Read more:

The Pegasus Project: what happened and how to protect yourself

The phone numbers of more than 180 journalists were discovered in a database of numbers designated for potential espionage, thanks to a leak of documents given to the Forbidden Stories journalistic collective and Amnesty International. The choice of surveillance targets was made by clients of the Israeli company NSO Group, which specialises in producing spyware that it sells to governments around the world. Its primary product, Pegasus, can compromise a mobile phone, extract all data from it and activate the microphone to record conversations.

In addition to journalists, the 50,000 people suspected of being targeted by state structures around the world included activists, academics and even top public officials.

Targeted surveillance

Pegasus enables the targeted compromise of mobile phones through malicious links or technical vulnerabilities in popular applications. In that way, it is possible to target a predetermined person and the confidential information stored on their phone – correspondence with journalistic sources, business and state secrets, information on political plans and actions, and the like.

Spyware

Spyware is a type of malware that collects data from an infected system and passes it on, usually to the person who created it. With such malware, passwords, personal data, correspondence, etc. can be collected without authorisation.

Pegasus

Use of the software against iOS devices was discovered in 2016, but it is believed to have been in use since 2013. Although NSO Group claims that Pegasus is intended to fight terrorism and international crime, human rights organisations have identified the use of the software by authoritarian regimes against civic activists and dissidents, including in the case of the assassinated Saudi journalist Jamal Khashoggi.

Who is using it

NSO Group’s customers are primarily state bodies authorised to conduct surveillance and interception of communications: intelligence and security agencies, police services and the military. Although only information on some 40 unnamed buyer countries is currently available, the Pegasus Project findings indicate that the spyware was used in Hungary, Azerbaijan, Bahrain, Mexico, Morocco, Saudi Arabia, Kazakhstan, Rwanda, India and the United Arab Emirates.

How it infects the device

The software targets devices running the Android operating system as well as some iOS versions, and exploits several different flaws in those systems. Infection vectors include opening a malicious link, photo apps, the Apple Music app and iMessage, while some attacks require no interaction from the target at all (zero-click).

What can it access

With Pegasus, attackers can reportedly gain access to virtually all data stored on the target’s smartphone, such as the contents of SMS correspondence, emails and chat apps, photos, videos, the address book, calendar data and GPS data. There are also options for remotely activating the phone’s microphone and camera and for recording calls.

What can I do

Digital rights falter amid political and social unrest

SHARE Foundation and Balkan Investigative Reporting Network – BIRN, which have been monitoring the state of digital rights and freedoms in Bosnia and Herzegovina, Croatia, Hungary, North Macedonia, Romania and Serbia since 2019, published a report on the violations of human rights and freedoms in the digital environment in the context of social and political unrest. From August 2019 to December 2020, the regional monitoring recorded more than 800 digital rights violations in an interactive online database.

Journalists, civil society activists, officials and the general public have faced vicious attacks – including verbal abuse, trolling, smear campaigns and pressure to retract content – in response to publishing information online. Much of our data has been compromised and our privacy increasingly endangered by widespread surveillance, especially during the pandemic.

BIRN and SHARE took an interdisciplinary approach and looked at the problems from legal, political, technological and societal perspectives in order to show the complexity of the cases in which violations of digital rights and freedoms occur. Most online violations, a total of 375, were related to pressure over online activities or speech, including threats, insults, unfounded accusations, hate speech and discrimination. These issues create an atmosphere dominated by fear and hatred, which leaves vulnerable communities such as LGBT+ people and migrants subject to additional attacks.

The main trends highlighted in the report are:

  • Democratic elections being undermined
  • Public service websites being hacked
  • The provocation and exploitation of social unrest
  • Spreading of conspiracy theories and pseudo-information
  • Online hatred leaving vulnerable people more isolated
  • Tech shortcuts failing to solve complex societal problems

Report findings show the need for legislative, political and social changes in monitored countries – the digital evolution must be viewed as a set of mechanisms and tools which primarily have to serve the needs of the people. The COVID-19 pandemic has proven that an open, free and affordable internet is absolutely essential in times of crisis. Only by insisting on accountability for digital rights breaches and providing education on the risks and possibilities of the digital environment can we hope to create a progressive, open and tolerant society. 

Read more:

Digital rights faltering amid political, social unrest: What now?

The SEE Network of civil society organisations is inviting all interested participants to join the first online discussion and knowledge share on the state of digital rights in Southern and Eastern Europe on July 1.

At the July 1 event (3pm–4.30pm CET), BIRN and SHARE Foundation will discuss their annual digital rights report together with other members of the newly established SEE Network; the online public session, starting at 3pm, will address the key trends concerning the digital ecosystem.

In August 2019, BIRN and SHARE Foundation started a unique process of monitoring the state of digital rights in Bosnia and Herzegovina, Croatia, Hungary, North Macedonia, Romania and Serbia, collecting more than 1,500 cases of digital rights violations in a public regional database.

In Southern and Eastern Europe, where online disinformation campaigns are endangering guaranteed individual freedoms and the decline in internet safety has become a worrying trend, citizens with poor media and digital literacy have been left without viable protection mechanisms.

The event will cover some of the most important trends mapped during the monitoring period, such as: the position of and threats posed to vulnerable groups, with a particular focus on gender perspectives; freedom of speech and online hatred; the effects of state propaganda and fake news on citizens’ right to receive accurate information; and the (ir)responsibility of big tech companies in the region.

The event will gather representatives of CSOs, experts (technical, legal and sociological), students, activists, tech enthusiasts and other interested parties in order to open a broad discussion on the growing challenges we face and the steps we need to take to counter the further deterioration of citizens’ digital rights, and to press for policy change.

The event aims to provide space for diverse voices to be heard and we are delighted to invite you to join us by filling in this online registration form.

Upon registration you will receive a confirmation email and the agenda.