- MaryGrace Lerin
Facebook Restricts Content Sharing in Ethiopia to Reduce Misinformation Dissemination and Hate Speech
The recent 'Facebook Files' internal data leak raised a range of issues and concerns, including the finding that content sharing is one of the platform's most harmful mechanisms, because the ease of amplifying questionable content by simply tapping 'Share' greatly increases the number of people who do the same.
In fact, according to one of the most recent revelations from Facebook whistleblower Frances Haugen, Facebook's own research has found that the 'Share' option is harmful, especially when it comes to shares of shares.
According to Alex Kantrowitz's Big Technology newsletter:
“The report noted that people are four times more likely to see misinformation when they encounter a post via a share of a share - kind of like a retweet of a retweet - compared to a typical photo or link on Facebook. Add a few more shares to the chain, and people are five to ten times more likely to see misinformation. It gets worse in certain countries. In India, people who encounter “deep reshares,” as the researchers call them, are twenty times more likely to see misinformation.”
To put it another way, anything that gets shared widely is far more likely to contain misinformation, which makes sense given how sensational and divisive such claims tend to be.
The question, however, is what Facebook, or Meta, will do about it, with Haugen stating that the company has ignored these findings.
That's not entirely accurate, though. Meta recently published the following note in an update on the measures it has introduced on Facebook specifically to limit the spread of misinformation and hate speech in Ethiopia ahead of the country's recent elections:
“To address possible viral content, we’re continuing to reduce content that has been shared by a chain of two or more people. We’re also continuing to reduce the distribution of content that our proactive detection technology identifies as likely to violate our policies against hate speech as well as from accounts that have recently and repeatedly posted violating content.”
So Meta is, in some cases, acting on its own findings by enacting post-sharing limits.
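To illustrate how that kind of re-share depth limit could work inside a feed-ranking pipeline, here's a minimal Python sketch. The field names, depth threshold, and demotion factor are illustrative assumptions, not Meta's actual system.

```python
# Minimal sketch of demoting "deep reshares" in feed ranking.
# All names and thresholds here are hypothetical, not Meta's actual code.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    reshare_depth: int   # 0 = original post, 1 = share, 2 = share of a share, ...
    base_score: float    # ranking score from other feed signals

MAX_UNPENALIZED_DEPTH = 1  # demote anything deeper than a single share
DEMOTION_FACTOR = 0.5      # halve the score for each extra level in the chain

def rank_score(post: Post) -> float:
    """Down-rank posts that reach a user via long reshare chains."""
    excess_depth = max(0, post.reshare_depth - MAX_UNPENALIZED_DEPTH)
    return post.base_score * (DEMOTION_FACTOR ** excess_depth)

original = Post("a", reshare_depth=0, base_score=10.0)
deep_reshare = Post("b", reshare_depth=3, base_score=10.0)
print(rank_score(original))      # 10.0
print(rank_score(deep_reshare))  # 2.5
```

The point of the sketch is simply that content reaching a user through a chain of two or more shares gets progressively less distribution, which is the behavior Meta describes in its Ethiopia update.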
Limiting re-shares is a welcome step, and it makes sense given the research. But since Meta recognizes that shares of shares can lead to the rapid amplification of harmful posts, why not make this a general rule, or, even better, remove the 'Share' option entirely to prevent that kind of rapid proliferation?
To be clear, users would still be able to share content even if Facebook removed the 'Share' button.
Users would still be able to include article links in their own updates, but because they'd have to create a new post each time, they'd be more inclined to add their own thoughts on the content.
Users would still be able to react to and 'Like' posts, which would boost those posts' exposure to their connections and broader networks through that engagement activity.
Users would still be able to comment on posts, which would improve visibility due to the algorithm's goal of showing the most engaging content to as many users as possible.
People could also still share posts via message, as demonstrated by an iteration of the Facebook post UI that Facebook tested in 2018, which replaced the 'Share' button with a 'Message' button.
So while there would still be ways to engage with content on Facebook, the research shows that a simple 'Share' button can contribute greatly to the rapid dissemination of dubious claims.
Perhaps removing it, and forcing users to think more about what they're doing, would reduce blind sharing and curb the spread of such posts.
That was the reasoning behind Twitter's removal of direct retweets for US users in October of last year, just ahead of the presidential election.
Rather than allowing users to quickly and easily retweet any claim, Twitter defaulted users to its 'Quote Tweet' option, in the hope of getting people to think a little more carefully about what they were sharing, rather than simply re-amplifying it.
That did have an effect. According to Twitter, Quote Tweets were used more frequently, "but 45 percent of them were single-word affirmations and 70 percent had less than 25 characters," and normal retweets were reinstated in December.
In other words, people were a little more cautious in their sharing, but it didn't lead to much more context in the process.
But then again, perhaps all that's required is for people to pause and consider the message for a moment, and that may be enough to stop them from propagating viral misinformation and false claims.
That was the case with Twitter's pop-up alerts on articles that users attempted to retweet without first opening the article link: faced with that additional barrier, users opened articles 40 percent more often.
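As a rough sketch of that kind of friction, the check below (in Python, with hypothetical function and parameter names, not Twitter's or Facebook's actual logic) shows the basic idea: if a post contains a link the user hasn't opened, show a prompt before completing the share.

```python
# Hypothetical "read before you share" check; names are illustrative only.
def should_prompt_before_share(post_has_link: bool, user_opened_link: bool) -> bool:
    """Prompt the user if they try to share a link they haven't opened."""
    return post_has_link and not user_opened_link

def share_flow(post_has_link: bool, user_opened_link: bool) -> str:
    if should_prompt_before_share(post_has_link, user_opened_link):
        # The share still goes through if the user insists; the aim is a pause,
        # not a hard block.
        return "show_prompt_then_share"
    return "share_immediately"

print(share_flow(post_has_link=True, user_opened_link=False))  # show_prompt_then_share
print(share_flow(post_has_link=True, user_opened_link=True))   # share_immediately
```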
Facebook has recently adopted the same approach, which suggests there's value in the method. And with its own research indicating that re-shares can be a negative factor, why not simply remove the option, to encourage more thought in the process?
Obviously, this would affect publishers, who would see a decline in referral traffic, and it would also reduce overall Facebook engagement by limiting the options for interacting with posts.
Maybe that's why Meta isn't willing to do it? Still, it has the data, it's already using it to mitigate potential harm in some scenarios, and it clearly recognizes that a change to its sharing process could be beneficial.
Why not put limitations in place across the platform?
It would be a significant step, and there are other factors to consider. But the research and other evidence show that Meta knows this is a viable option, and doing it would surely reduce the risk of harm through mindless sharing. Even so, it doesn't seem likely to happen anytime soon.