With COVID-19 misinformation continuing to circulate through social media platforms, Twitter has announced some new measures to help slow the spread of falsehoods, including a new label to alert users to potentially harmful COVID-19 misinformation.
As explained by Twitter:
"During active conversations about disputed issues, it can be helpful to see additional context from trusted sources. Earlier this year, we introduced a new label for Tweets containing synthetic and manipulated media. Similar labels will now appear on Tweets containing potentially harmful, misleading information related to COVID-19. This will also apply to Tweets sent before today."
The tags do indeed look similar to Twitter's 'Manipulated media' alerts, which the platform added back in February.
But interestingly, not long after that release, Twitter acknowledged that many users found the 'Manipulated media' tag too small and too easy to miss to be effective. In response, Twitter said that it would go back to the drawing board and update the tags.
As part of a new rule, we may add a “Manipulated media” label to videos and photos that've been edited and deceptively shared. We heard the label isn't noticeable, so we're working to make it easier to see on your timeline and visible if you tap into a Tweet or go to a profile. https://t.co/EgNJJNXWjU — Twitter Support (@TwitterSupport) March 19, 2020
Yet it's now chosen to replicate that same alert format for its new COVID-19 labels.
It's still likely somewhat effective, and it's definitely better than letting harmful information go unchecked. But it seems like Twitter could do more here - either with a more prominent tag, as noted, or by removing these tweets entirely, given that they've already been identified as sharing potentially harmful misinformation.
Of course, Twitter would prefer not to remove tweets at all, as it would rather facilitate conversation where possible. But in this instance, it seems like it could take that extra step.
On how tweets are identified and tagged in this way, Twitter says:
"Our teams are using, and improving on, internal systems to proactively monitor content related to COVID-19. Additionally, we’ll continue to rely on trusted partners to identify content that is likely to result in offline harm."
So Twitter has identified and vetted these tweets, and deemed them to be "potentially harmful or misleading". But it's leaving them up, with a label? Seems like the threshold for tolerance should already have been crossed by this point.
Still, again, the labels will hopefully prompt more people to reconsider before viewing and/or sharing such content - while Twitter has also noted that it will look to add more intrusive warnings on especially harmful misinformation.
This second variation would be a lot more effective, and should likely be applied to all tweets identified in this way - if, indeed, Twitter isn't planning to simply remove such tweets completely.
So what can make a tweet cross the line into this more 'extreme' warning territory?
Twitter says that it will look to take action based on three distinct categories:
Misleading information - Statements or assertions that have been confirmed to be false or misleading by subject-matter experts, such as public health authorities.
Disputed claims - Statements or assertions in which the accuracy, truthfulness, or credibility of the claim is contested or unknown.
Unverified claims - Information (which could be true or false) that is unconfirmed at the time it is shared.
These categories are weighed against a severity matrix, which determines Twitter's enforcement action.
What, exactly, qualifies something as 'severe' misleading information is not clear, but there is, at least, a threshold of tolerance beyond which Twitter will remove such tweets. It still seems somewhat problematic - but at the same time, I respect that Twitter's dealing with a wide variety of potential violations in this respect, and blanket rulings are not always a viable approach.
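To make the matrix idea concrete, the approach can be sketched as a simple lookup from (category, severity) pairs to enforcement actions. Note that this is an illustrative sketch only - Twitter hasn't published exact thresholds, and the specific action values below are hypothetical examples, not Twitter's confirmed rules.

```python
# Hypothetical sketch of a category/severity enforcement matrix.
# The category names come from Twitter's announcement; the severity
# levels and resulting actions are assumptions for illustration.
ACTIONS = {
    ("misleading", "moderate"): "label",
    ("misleading", "severe"): "remove",
    ("disputed", "moderate"): "no action",
    ("disputed", "severe"): "warning",
    ("unverified", "moderate"): "no action",
    ("unverified", "severe"): "no action",
}

def enforcement_action(category: str, severity: str) -> str:
    """Look up the action for a (category, severity) pair,
    defaulting to no action for unrecognized combinations."""
    return ACTIONS.get((category, severity), "no action")

print(enforcement_action("misleading", "severe"))  # remove
```

The point of the matrix structure is that the same category can produce different outcomes depending on assessed harm - which is why 'severe' misleading information could be removed while milder cases only get a label.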
But it is a significant area of concern. Information is the key weapon in the battle against the spread of COVID-19, as we need people to heed the official warnings in order to limit exposure, and reduce the potential for viral contamination within communities. Holding protest rallies due to distrust in that official information is the opposite of the action required, and such dissent is further spurred by misinformation campaigns flowing through social networks.
Add to this fake cures, which lead to people misunderstanding the risks, or dangerous counter-movements that could slow the take-up of a vaccine - like the recent 'Plandemic' anti-vax documentary, which has now been banned on every major social network - and it's clear that action needs to be taken to halt unfounded claims, and stop people being persuaded by anti-establishment commentary.
And that's only going to get worse as time goes on. As the number of active cases falls, the calls to re-open cities are only going to get louder - and as that happens, people will also be more susceptible to conspiracy theories and commentary that aligns with their desires. As such, all social networks need to take more action on this front.
Twitter is moving ahead, which is good, but it seems like more can be done - and should be considered, given the stakes.
You can read about Twitter's new COVID-19 misinformation measures here.