MaryGrace Lerin

YouTube Explains Its Misinformation Policing Strategy and the Obstacles to Effective Action

The issue of misinformation on social media, and how it should be policed, is extremely complex, with no easy answers. Removing plainly false claims seems like the most reasonable and effective step - but it's not always that clear-cut, and going too far the other way and removing too much risks stifling free speech and useful debate.


Both approaches have drawbacks, and recently, YouTube's Chief Product Officer Neal Mohan shared his thoughts on the topic, and on how the company plans to balance its response to misinformation against the need to maintain an open platform for all users.

To begin with, on the crucial topic of the moment, medical misinformation, Mohan notes that YouTube has removed over a million videos containing dangerous coronavirus misinformation since February 2020, including those promoting false cures or claiming that the pandemic is a hoax.


"In the midst of a global pandemic, everyone should be armed with absolutely the best information available to keep themselves and their families safe."


Nonetheless, YouTube has also facilitated the spread of a substantial amount of COVID misinformation. In May 2020, for example, the controversial anti-vax video 'Plandemic' was viewed over 7 million times on YouTube before being removed.


The difficulty for YouTube here, as for Facebook, is scale - with so many users active on the platform at all times, it's impossible for YouTube to catch everything in a timely manner, and even a small delay in enforcement can mean millions more views and far greater impact.


According to Mohan, the majority of the 10 million videos the platform removes for Community Guidelines violations each quarter don't even reach 10 views. But those are averages, and cases like 'Plandemic' will inevitably slip through the cracks, as Mohan admits.


"Speedy removals will always be important but we know they’re not nearly enough. Instead, it’s how we also treat all the content we’re leaving up on YouTube that gives us the best path forward."


Another aspect of YouTube's approach, according to Mohan, is ensuring that content from credible sources is prioritized in the app's search and discovery elements, while the company also works to reduce the reach of less credible providers.


"When individuals search for news or information, they now get results that are optimized for quality rather than how sensational the item may be."


Which is the right course to take - optimizing for engagement seems a dangerous path in this respect. But the modern media ecosystem can muddy this too, with publishers incentivized to produce more contentious, emotionally charged content in order to attract more clicks.


We saw this earlier in the week, when Facebook's own analytics revealed that a single piece from The Chicago Tribune received 54 million views through Facebook engagement in the first quarter of this year.


The headline was misleading, as the doctor died of causes unrelated to the vaccine. But you can imagine how it would have bolstered anti-vax groups across the social network - and some have responded by arguing that the fault here lay not with Facebook's systems, which amplified the post, but with The Chicago Tribune itself for publishing a clearly misleading headline.


Which is fair, but every publisher understands what drives Facebook engagement - and this case demonstrates it. Emotional, divisive headlines that prompt engagement in the form of likes, shares, and comments work best for maximizing Facebook reach and referral traffic. The Tribune got 54 million views from a single article, which highlights a significant flaw in the media incentive system.


It also underlines the fact that even 'reputable' outlets can spread misinformation and content that fuels harmful movements, which means YouTube's focus on amplifying content from trusted sources isn't necessarily a fix for such concerns.


Which Mohan explains further:


"In many cases, misinformation is not clear-cut. By nature, it evolves constantly and often lacks a primary source to tell us exactly who’s right. Like in the aftermath of an attack, conflicting information can come from all different directions. Crowdsourced tips have even identified the wrong culprit or victims, to devastating effect. In the absence of certainty, should tech companies decide when and where to set boundaries in the murky territory of misinformation? My strong conviction is no."


You can see why Mohan is reluctant to lean further into removals, a solution often called for by outside observers, while he also alludes to the growing interference of authoritarian regimes seeking to silence opposing viewpoints through online censorship.


"We’re seeing disturbing new momentum around governments ordering the takedown of content for political purposes. And I personally believe we’re better off as a society when we can have an open debate. One person’s misinfo is often another person’s deeply held belief, including perspectives that are provocative, potentially offensive, or even in some cases, include information that may not pass a fact checker’s scrutiny."


Again, there are no clear answers, and for platforms with the reach of YouTube or Facebook, this is a significant consideration that demands research and, where possible, action.

But that won't solve every problem. Sometimes YouTube will leave up content that should be removed, with the added risks of visibility and amplification that entails, while at other times it will take down content that many believe should have stayed. Mohan doesn't deny or shirk responsibility for this, and it's worth noting the nuance he brings to the debate in trying to chart the right course of action.


In some cases the call is clear - COVID-19 falsehoods, for example, should be removed in line with the advice of official medical authorities. But that's not always the case, and too often these judgment calls are made on a platform-by-platform basis when they arguably shouldn't be. The ideal solution would be a larger, independent oversight body making such decisions in real time and guiding each platform's approach.

However, even this could be abused.


As noted, there are no easy answers, but it's interesting to see YouTube's perspective on the evolving debate.
