Roundabout team

Facebook is researching how much negativity is actually triggered by misinterpretation

Amid all the angst and disagreement online, which has arguably led to more societal division and tribalism within communities, how much of that negativity is actually triggered by misinterpretation?


People get angry at each other's tweets, comments and posts all the time, but is that anger justified? How often do you post something, only to have others read it the wrong way, and respond more aggressively than you might have expected?


This has always been a challenge in text-based communication - without visual cues and other signals, context can get lost in translation. It's particularly the case with sarcasm - for example, comedy writer Megan Amram recently tweeted this:


I'm an anti vaxxer which is why I think we should just all get herd immunity by exposing ourselves to the virus to get antibodies, ideally a weakened version of the virus, in some sort of doctor's office or pharmacy

— Megan Amram (@meganamram) April 11, 2020


Which is a joke, from a comedian. Yet, as the earnest replies to the tweet showed, many missed the point.


Those replies are funny in themselves, but they also underline the point - things can be, and often are, misinterpreted online. And you only have to look at the engagement numbers on each of these replies to get an idea of the potential impact.


Are all those likes in support of the commenter or the original tweet creator? Could these replies actually be inspiring more angst within these user groups?


To get a better idea of the potential impacts of misinterpretation online, Facebook recently conducted a study of more than 16,000 Facebook users, in which it sought to clarify what people intended to communicate with their posts and comments, and how other users subsequently perceived the same.


As explained by Facebook:


"We combined logged data about public comments on Facebook with a survey of over 16,000 people about their intentions in writing these comments or about their perceptions of comments that others had written. Unlike previous studies of online discussions that have largely relied on third-party labels to quantify properties such as sentiment and subjectivity, our approach also directly captures what the speakers actually intended when writing their comments. In particular, our analysis focuses on judgments of whether a comment is stating a fact or an opinion, since these concepts were shown to be often confused."


Facebook's specific intention was to find out if basic misinterpretation like this can lead to increased anger online.


For example, if I were to say that "5G radiation does not accelerate the spread of COVID-19", that could be perceived by some as my opinion, and may spark more anger towards me in response. But if I re-framed the same as "Research has shown that 5G radiation does not accelerate the spread of COVID-19", that's more likely to be perceived as intended - it's not my opinion, it's based on scientific research.


Of course, some would still debate the latter in this instance, but the idea is that people often intend to share something that they've read or heard, yet state it in a way that seems like they're sharing a personal opinion. Which leads to more angry responses.


"When a comment whose author intended to share a fact is misperceived as sharing an opinion, the subsequent conversation is more likely to derail into uncivil behavior than when the comment is perceived as intended. 


That's fairly logical, but as you can imagine, it's also incredibly common - so how much of our disagreement online is actually caused by this type of misinterpretation?


It's impossible to know for sure, but interestingly, Facebook is looking to use the findings from this research to create new systems that could prompt people as they post such comments, in order to stop misinterpretation from occurring.


"Our results might suggest strategies for promoting healthier interactions on online discussion platforms. For instance, classifiers that predict intentions and perceptions could signal to people when a comment they are writing may be misperceived by others, and suggest strategies (based on the results of our linguistic analysis) for reducing this risk. Still, user studies would be needed to guide the design of such interventions to minimize the risk of unintended negative consequences."


That would be similar to the system that Instagram uses to alert people to comments which could be perceived as offensive.


Maybe Facebook could build a system that alerts people when a comment they're about to post will likely be misinterpreted, based on the language signals identified in this study. That could have a significant influence on the health of online discourse - if you could reduce misunderstanding, maybe the broader online ecosystem would be less prone to angry, personal responses, which then spark pile-ons and amplify the negativity.
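
To make the idea concrete, here's a rough, hypothetical sketch of what such a pre-posting check might look like: a simple text classifier (TF-IDF features feeding a logistic regression, via scikit-learn) trained on comments labeled by whether they were misperceived as opinion, then used to warn the author before a draft goes live. The training examples, labels and threshold below are invented purely for illustration - Facebook's actual models, features and data are not public.

```python
# Hypothetical sketch of a "this may be misread as opinion" pre-posting check.
# Training data and threshold are invented for illustration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy examples: 1 = "likely to be misperceived as opinion", 0 = "perceived as intended".
train_comments = [
    "5G does not spread COVID-19",
    "Research has shown that 5G does not spread COVID-19",
    "Vaccines are safe",
    "According to the CDC, vaccines are safe and effective",
    "Masks reduce transmission",
    "A 2020 study found that masks reduce transmission",
]
train_labels = [1, 0, 1, 0, 1, 0]

# TF-IDF over word unigrams/bigrams into logistic regression - a common
# baseline for this kind of text-classification task.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
model.fit(train_comments, train_labels)

def warn_if_risky(draft: str, threshold: float = 0.5) -> None:
    """Prompt the author if the draft looks likely to be read as opinion."""
    risk = model.predict_proba([draft])[0][1]
    if risk >= threshold:
        print(f"Heads up: this may read as opinion ({risk:.0%}). Consider citing your source.")
    else:
        print(f"Looks fine ({risk:.0%} risk).")

warn_if_risky("Drinking bleach does not cure COVID-19")
```

A real system would obviously need far richer features, large-scale labeled data of the kind the study gathered, and the user testing the researchers themselves call for - but the basic shape of the intervention would be similar: score the draft, and nudge the author before it's posted.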


Essentially, the findings here could help to improve the civility of online discourse. Which is a lofty goal, for sure, but a worthwhile one, given the significant influence of such interactions - particularly with social platform algorithms being driven by engagement, which means posts that spark debate and argument end up being seen by even more people.


Given the elements at play, this is an interesting and important area of discussion. Whether it leads to practical solutions is another thing, but the findings show that it's clearly worthy of further examination.

