HateAid gGmbH | LICRA | 2022
Months before the French election,
Facebook gives a free pass to far-right hate
In an increasingly heated political climate, Facebook fails to enforce its own content moderation policies and remove hate posts, including incitement to violence against political candidates, women and migrants, even after content that violates the platform's terms of service or French law has been reported to the platform by users.
Key findings
• In 70 % of the cases, Facebook failed to delete hate comments even after we reported them to the company through its flagging system. This included insults against women and political candidates (e.g. "Je chié dans ta gueule espèce de salope", roughly: "I shit in your face, you slut") as well as racist hate speech (e.g. "race de bâtards, a passer au lance flamme", roughly: "breed of bastards, to be put to the flamethrower").
• 94 % of the reported comments that Facebook failed to delete (193 out of 205) were assessed by legal experts as violating French law.
• Facebook also failed to handle user notifications diligently and transparently, indicating profound deficits in its notice and action procedures. Facebook replied within the 24-hour time frame in less than 20 % of the cases.
• The hate comments had been online for between 19 and 690 days (431 days on average) when we reported them, despite violating Facebook's community standards or French law.
• The findings suggest that the threat of credible financial sanctions is needed for Facebook to comply with existing rules and protect the rights and safety of its users.
• These findings come at a time when French democratic representatives are receiving death threats, raising important questions about the role and responsibility of political parties in encouraging healthy public debate, online and offline. By failing to enforce its own terms of service consistently, Facebook rewards the use of inciting content for political mobilisation and distorts political competition at the expense of those actors who "play by the rules".
• These findings also draw attention to the need for platforms to take more systemic approaches to regulating both manifestly illegal and toxic content online, and to carefully consider the mainstreaming of hateful content and the tangible implications it can have in the context of elections.
• This study suggests that social media companies must not only double down on their efforts to comply with their own moderation policies; they should also take a cross-harm risk mitigation perspective when developing their products, so as not to enable such a toxic climate.
• These findings come just months before the EU is set to close the negotiations on the Digital Services Act, which will lay down content moderation rules for Facebook and similar platforms. They raise serious questions about Facebook's readiness to comply with the forthcoming rules and highlight the need for a strong enforcement regime.
Data collection
From a dataset of 2 412 114 public Facebook comments collected by researchers, we selected 280 highly toxic comments drawing on the Perspective API [1]. The majority of the comments were found below posts associated with far-right groupings in France (see Figure 1). We assess all of them to be in breach of Facebook's own community standards [2] or illegal under French law.

[1] https://www.perspectiveapi.com/
[2] https://transparency.fb.com/policies/community-standards/
FIGURE 1: DISTRIBUTION OF COMMENTS IN RELATION TO FAR-RIGHT GROUPS
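The report does not include the selection code itself; the sketch below illustrates, under stated assumptions, how comments could be scored with the Perspective API for this kind of selection. The toxicity threshold, the input file name and its column names are illustrative assumptions, not details taken from the study.

```python
# Hypothetical sketch: scoring public Facebook comments with Google's
# Perspective API and keeping the highly toxic ones.
# Assumptions (not from the report): an input CSV "facebook_comments.csv"
# with a "comment_text" column, and a 0.9 TOXICITY cut-off for "highly toxic".
import csv
import time

from googleapiclient import discovery  # pip install google-api-python-client

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def toxicity(text: str) -> float:
    """Return the Perspective TOXICITY score (0-1) for a French-language comment."""
    request = {
        "comment": {"text": text},
        "languages": ["fr"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = client.comments().analyze(body=request).execute()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

highly_toxic = []
with open("facebook_comments.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        score = toxicity(row["comment_text"])
        if score >= 0.9:  # assumed threshold for "highly toxic"
            highly_toxic.append({**row, "toxicity": score})
        time.sleep(1.1)  # stay within the default one-request-per-second quota

print(f"Selected {len(highly_toxic)} highly toxic comments")
```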
Monitoring results
The selected 280 comments were reported through Facebook's official reporting mechanism. 88 of the 280 reported comments (31.43 %) were deleted after the first day of reporting, and one day later Facebook deleted two more comments. However, on the fourth day of monitoring, five of the deleted comments were restored, and on the fifth day Facebook restored one more comment. From the fifth day onwards there were no further changes. 193 (94 %) of the 205 comments that Facebook did not delete were assessed by legal experts as violating French law.

To summarise: after a week of monitoring, only 84 comments (30.0 %) containing highly toxic hate speech had been deleted [3]. These 84 deleted comments had already been online for more than a year (approximately 450 days).

[3] In the German Report, 50 % of the reported comments were removed within 24 hours. This percentage fluctuated only slightly over the whole one-week monitoring period.
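The deletion tally above can be checked with simple arithmetic; the following minimal sketch only restates the counts reported in this section.

```python
# Removal timeline as reported above, tallied step by step.
total_reported = 280
deleted_day_1 = 88    # deleted after the first day of reporting
deleted_later = 2     # two further deletions one day later
restored_day_4 = 5    # five deleted comments restored on day four
restored_day_5 = 1    # one more comment restored on day five

still_deleted = deleted_day_1 + deleted_later - restored_day_4 - restored_day_5
print(still_deleted)                                   # 84
print(round(still_deleted / total_reported * 100, 1))  # 30.0 (% of all reported comments)
```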
Notice and action procedure
Although Facebook claims that the company will update the notifier within 24 hours of receiving a notification, it failed to reply on time in 81.78 % of the cases. Facebook did not even create "tickets" for most of the reports: we received 60 tickets for 280 reported comments, and only 51 of them were replied to [4]. Meanwhile, Facebook removed 84 comments, meaning that in 33 cases Facebook failed to inform us of its decision to remove a reported comment.

[4] In the German Report, 20.43 % of reports were left without a reply.
We received three types of replies from Facebook:
1. Facebook agreed to delete the comment, referring to the Community Standards (36
replies), as illustrated in figure 2:
FIGURE 2: RESPONSE TO THE REPORT
2. Facebook did not agree to delete the comment, referring to the Community
Standards (12 replies), as illustrated in figure 3:
FIGURE 3: RESPONSE TO THE REPORT
3. Facebook did not agree to delete the comment, referring to the Technology (3
replies), as illustrated in figure 4:
In this specific case it is not clear whether any human oversight was involved in making the decision, or whether the decision was made solely by the "technology", which is essentially an artificial intelligence system. This should draw further attention to quality and human oversight in content moderation, in order to prevent negative effects on the freedoms and rights of users. Human oversight at all steps of the automated process is essential to provide a safety net for the rights of affected users [5].
[5] Llansó, van Hoboken, Leerssen, Harambam, "Artificial Intelligence, Content Moderation, and Freedom of Expression", Transatlantic Working Group, 2020: https://cdn.annenbergpublicpolicycenter.org/wp-content/uploads/2020/06/Artificial_Intelligence_TWG_Llanso_Feb_2020.pdf
FIGURE 4: RESPONSE TO THE REPORT
Examples of comments that Facebook deleted after reporting
Examples of comments that Facebook did not delete after reporting
Examples of comments that were restored
Commentary
Recommendations from HateAid and LICRA to EU lawmakers on the Digital Services Act, following the findings of the report
I. Give all users a right to complain about wrongful content decisions
made by online platforms
In fast-paced online traffic, where 309 million people in Europe use Facebook daily [6], errors in content moderation are to be expected. These errors often have adverse effects on individuals, on democratic events such as elections, and on public discourse overall.

[6] "Meta Earnings Presentation Q4 2021", Meta, 2021, https://s21.q4cdn.com/399680738/files/doc_financials/2021/q4/Q4-2021_Earnings-Presentation-Final.pdf
Users whose notices have been rejected by online platforms should have a right to a second assessment through an internal complaint mechanism, so that they can challenge wrongful platform decisions, as highlighted by the findings of this experiment.

Furthermore, wrongful content decisions are often made due to insufficient staffing of human content moderators, a lack of moderator training, and/or a lack of moderators proficient in the variety of languages used. It is important to ensure that platforms disclose, in a public annual report, details of the human resources they have in place for content moderation.
II. Don’t grant a free pass to online platforms to leave unlawful abuse
online
In reality, what motivates a platform to delete a piece of unlawful content notified through official reporting mechanisms, be it racist hate speech or incitement to violence, is the fear of being held accountable. However, lawmakers risk giving online platforms a free pass to leave unlawful content online with no accountability. Policymakers should ensure that all notices are thoroughly assessed by online platforms, without lowering the standard of assessment. Otherwise, they risk enabling a free flow of unlawful hate speech and lowering the bar for already under-resourced content moderation systems and practices which, in the case of Facebook, have already been criticised by international organisations and civil society groups for contributing to real-life violence against ethnic and religious groups in Myanmar and India, the latter being Facebook's biggest market in the world.
III. Provide users with an effective help-line from authorities and
online platforms
Users are often left alone when dealing with online violence on social media. Victims describe a sense of helplessness and isolation. The current Russian invasion of Ukraine has shown the platforms' ability to react, mobilise and assign resources when under political pressure. We need regulation that mandates the necessary support on a day-to-day basis:

• Enable authorities to help users whose rights are violated by requesting that platforms remove or suspend access to the illegal content in question.
• Online platforms should establish contact points for consumers that do not rely solely on automated means of communication and that are available in one of the official languages of each Member State.
• In order to ensure effective communication and enforcement of rules towards platforms, there should be a point of contact in every Member State, accessible to users and authorities. This point of contact should be able to receive notifications as well as documents, including those initiating proceedings against the platform, in a legally binding way. This would lower the threshold for victims of online violence to defend themselves in court.
IV. Be realistic in obligations for NGO trusted flaggers
An effective system of trusted flagging relies heavily on civil society: often publicly or donor-funded NGOs, like HateAid and LICRA, that have the best incentives to become trusted flaggers and do not receive additional funds for doing this job. It is important not to overburden NGOs with red tape, with reporting obligations that require expensive technical equipment and human resources, or with overly strict application requirements that may deter them from becoming trusted flaggers. Instead, we suggest shifting the burden of reporting requirements concerning the functioning of trusted flaggers from NGOs to online platforms, which could easily generate this information in a few clicks.

Moreover, the independence of the authorities that award trusted flagger status needs to be guaranteed, and organisations that are denied the status should have access to an appeal procedure.
V. Establish enforceable risk assessment and mitigation
Just as a car would not enter the market without certification and testing, tech companies should assess and address systemic risks before the products and features of their systems, including algorithms, reach users. Documents revealed by Facebook whistleblower Frances Haugen gave an insight into the role of algorithmic amplification in spreading hate speech to drive user engagement; with proper risk mitigation and strong enforcement in place, this should not have happened. Furthermore, the data provided by the platforms to conduct the risk assessment should be independently verified.
VI. Enable NGOs to do public interest research on Tech
Civil society has been at the forefront of defending citizens' interests, exposing rights breaches and demanding accountability from tech companies for decades. We ask lawmakers to acknowledge this crucial role of civil society by widening platform data access for vetted NGOs, associations, and not-for-profit bodies. NGOs should be given the chance to obtain platform data of societal importance in order to carry out research that benefits society.
About HateAid
HateAid gGmbH was founded in 2018. We are the first organisation in Germany to offer protection from digital violence to those affected and, at the same time, to support the effective sanctioning of perpetrators. Moreover, we create social awareness of the destructive effects of digital hatred on our democracy. HateAid's aim is to relieve the burden on victims of attacks, enforce their rights, deter perpetrators, and strengthen our democracy and society overall. As part of the Landecker Digital Justice Movement, HateAid advocates for more platform responsibility on social media.
About LICRA
The International League Against Racism and Antisemitism (LICRA) was founded in 1927 and is an INGO with participatory status at the Council of Europe. LICRA combats racism, antisemitism, xenophobia and other forms of discrimination. It is profoundly attached to the values of freedom, equality and fraternity, and promotes the ideal of universalism. Its actions are based on a network of volunteers present in Europe and especially in France. LICRA is a member of the Conference of International Non-Governmental Organisations of the Council of Europe, in which it chairs the "Artificial Intelligence and Human Rights" committee. LICRA has been very active in the Steering Committee on Anti-Discrimination, Diversity and Inclusion (CDADI) and in the Committee of Experts on Combating Hate Speech (ADI/MSI-DIS) since their creation.