As part of an expansion of their efforts to combat extremism and hate speech, Facebook, Twitter, Google, and Microsoft will now formally identify a range of white supremacist groups and far-right militias as terrorist organizations. It's the first time their efforts have been expanded to include domestic organizations. These domestic groups will now be scrutinized more closely as part of an update to the Global Internet Forum to Counter Terrorism (GIFCT) database, which flags dangerous organizations for each platform to prioritize.
According to Reuters, until now, the GIFCT database has only included videos and photos from terrorist organizations on the UN's list [...] In the coming months, the group will add attacker manifestos - which are frequently shared by sympathizers after white nationalist atrocities - as well as other publications and links flagged by the UN's Tech Against Terrorism initiative. It will include URLs and PDFs from other groups, including the Proud Boys, the Three Percenters, and neo-Nazis, as well as lists from the Five Eyes intelligence-sharing alliance.
Many of these organizations have already been banned or restricted by the major platforms, with Twitter, Facebook, and YouTube all taking steps in the last two years to limit the reach of numerous US-based groups. Even before the Capitol riot earlier this year, the major platforms had recognized the potential threat posed by domestic groups like the Proud Boys, and how they can use social media networks to recruit and amplify their agendas.
The Capitol riot, however, was the final straw, demonstrating a serious threat of large-scale political violence. Facebook has, of course, been accused of fueling similar upheavals in other countries, with varying degrees of accountability and repercussions. But domestic incidents will understandably receive more attention, and Facebook, which has been identified as a significant facilitator of far-right extremism in recent years, clearly needs to do more on this front.
So, will this help improve the situation?
It's impossible to say for certain. Some harmful, dangerous movements gain traction on social networks precisely because of their controversial nature, which sparks more user response and discussion, amplifying the same content to even more people.
That's partly due to social platform algorithms, which amplify material that stimulates engagement and keeps users in each app, but it's also due to human nature, with more shocking, sensational, and emotionally charged news and messages attracting more attention. People crave the endorphin hit that comes with receiving Likes and comments on their posts, and the easiest way to get that feeling is to push the boundaries, as bland updates won't attract much attention. Taking a controversial stance, however, can amplify your voice, and with everyone looking to be heard among the growing sea of social media voices, it's no surprise that more extreme viewpoints have been able to gain traction online.
Suppressing the larger organizations behind such content should have some impact - and they certainly shouldn't be allowed to proliferate, as we've seen what can happen. But the broader problem of online extremism remains a major concern that will require continued attention.