A number of Australian Muslims have received an automated response from Facebook regarding reports of hate speech. While the reported posts clearly contain hate speech that breaches the platform’s community standards, Facebook’s automated response system seems to think otherwise.
A team of Australian social scientists received funding through Facebook’s content policy research awards and used it to investigate hate speech on LGBTQI+ Facebook pages in India, Myanmar, Indonesia, the Philippines, and Australia.
Three aspects of hate speech regulation were studied over 18 months. Firstly, hate speech law in the five countries was mapped in order to understand the legal implications of hate speech. Secondly, Facebook’s definition of “hate speech” was studied, and all recognized forms and contexts were noted.
Finally, Facebook’s content regulation terms were examined, and staff were asked how the company’s policies and procedures were used to understand and identify forms of hate on the platform.
Rather than testing the efficiency of the in-house moderators by studying existing datasets, the group chose to capture posts and comments from each country’s top three LGBTQI+ public Facebook pages. Specifically, it captured harmful posts that had been missed by the platform’s AI filters and human moderators.
To explore how experiences could be improved and abuse reduced, admins of the various LGBTQI+ pages were interviewed. According to their testimonies, Facebook let them down: the platform often rejected reports of hate speech even when posts had clearly breached the community standards, and some posts that had been taken down were restored on appeal.
According to most of the page admins, Facebook’s “flagging” process was not as effective as it should be and rarely resulted in permanent removal of harmful content. The admins wanted to consult with Facebook so the company could get a better idea of the kinds of abusive and hateful content actually circulating on the platform.
Facebook has reportedly long had a problem with how it deals with hate speech in Asia. For example, while some Hindu extremists have indeed been banned, their Facebook pages have not been taken down and have simply been allowed to remain online.
Encouragingly, over the course of the study Facebook expanded its definition of hate speech and now captures and punishes a much wider range of harmful behavior on its platform. It is also better able to recognize what kind of content may trigger violence and cause harm to many people.
It should be noted that, in the five countries studied, hate speech is rarely prohibited by law or subject to sanctions. While some legal regulations, such as cybersecurity or religious tolerance laws, could be used to counter hate speech, most tend to act as suppressants of political dissent and do little to reduce hate speech itself.
The researchers concluded that Facebook’s trouble lies not in defining hate but in identifying certain types of it, particularly hate written in minority languages or regional dialects. This is because its AI system cannot translate regional dialects accurately, and Facebook has failed to provide adequate training materials to its human moderators. The system also responds erratically to the reports generated by users “flagging” hateful content.
In the Philippines and Indonesia, LGBTQI+ groups tend to be more exposed to discrimination and intimidation, and Muslim and other users on the platform receive more threats of death, stoning, or beheading.
In Indian groups, Facebook’s moderation systems failed to flag vomiting emojis posted in response to gay wedding photos, and a number of reports of harmful content were rejected despite clear violations of the community guidelines.
Australia, on the other hand, showed virtually no hate speech. This could mean either that less hate speech is posted in the country or that English-language moderation is much more effective.
Like Australia’s, Myanmar’s LGBTQI+ groups were exposed to far less hate speech. This is partly because Facebook has been working to reduce harmful content there, after the platform was used to persecute the Rohingya Muslim minority. Myanmar also has fewer issues with gender diversity compared to India, Indonesia, and the Philippines.
While Facebook has taken some important steps toward monitoring and filtering hate speech, there is concern that the pandemic has made the platform much more reliant on AI and machine moderation. At present, Facebook’s AI can identify hate in only about 50 languages, even though thousands are used on the platform every day.
Based on the study, the researchers have outlined a number of problems and recommendations. They urge the company to remain in contact with persecuted groups in the various regions in order to better grasp hate in local contexts and languages.
Moreover, Facebook should increase the number of specialists and in-house moderators it employs, and equip those moderators with the skills to understand the dialects spoken in the regions they cover.
As with its efforts in Europe, the company should also help publicize trusted partner channels in order to provide a more visible, official avenue for reporting hate speech.
In addition, Facebook could cooperate with governments and NGOs to set up regional hate speech monitoring trials in Asia to further regulate and remove hateful content on the platform.
All in all, the company should continue to improve its hate speech regulation, as it is far from perfect.