TikTok adds new rules to ban harmful misinformation
With discussion around misinformation on social media set to ramp up yet again as we head towards the 2020 US Presidential Election, rising video app TikTok has this week announced a revised version of its Community Guidelines, which includes a specific section outlawing the sharing of misinformation within the app.
As explained by TikTok:
"The Community Guidelines we've published today give users far more detail than previous versions. [...] Users will also notice that we've grouped violations into 10 distinct categories, each of which includes an explanation of the rationale and several detailed bullet points to clarify what type of misbehavior would fall into that category. These changes offer clarity around how we define harmful or unsafe content that is not permitted on the platform. It’s important that users have insight into the philosophy behind our moderation decisions and the framework for making such judgements."
The broader aim of these new guidelines, says TikTok, is to "keep this community safe", catering to a range of problematic behaviors and potential issues within the app.
Most of TikTok's Community Guidelines are fairly generic, but on misinformation specifically, TikTok's rules state that:
"We do not permit misinformation that could cause harm to our community or the larger public. While we encourage our users to have respectful conversations about the subjects that matter to them, we remove misinformation that could cause harm to an individual's health or wider public safety. We also remove content distributed by disinformation campaigns."
Specifically, TikTok says that it will remove misinformation that:
Incites fear, hate, or prejudice
Could cause harm to an individual's health - such as misleading information about medical treatment
Proliferates hoaxes, phishing attempts, or "manipulated content meant to cause harm"
Misleads community members about elections or other civic processes
TikTok has had rules against misleading content in place for some time, but until now, they've been mostly focused on scams and fake profiles. These new regulations take things significantly further, and could extend to concerning movements like anti-vax campaigns and political manipulation, the latter of which is specifically addressed in the last point.
It's also interesting to note the mention of "manipulated content meant to cause harm", as TikTok is reportedly working on a new feature which would essentially facilitate deepfakes in the app.
But as noted by Reuters, the guidelines don't explain how TikTok will determine what constitutes "misleading" content, nor do they provide much depth, which leaves considerable leeway for interpretation in subsequent enforcement actions.
That could be problematic, especially given the various concerns around TikTok's content moderation practices thus far, and the influence of the Chinese Government over the app's decisions in this respect.
In September last year, for example, TikTok came under fire after internal documents showed that the app's team had been instructed to censor videos that mentioned Tiananmen Square, Tibetan independence, or the religious group Falun Gong, which is banned in China. Two months later, another investigation found that TikTok moderators had also been advised to flag content from users who appeared to have autism, Down syndrome or facial disfigurements, with their uploads then distributed to smaller audience subsets, ostensibly to limit the impacts of harassment and cyberbullying.
Given the various questionable decisions TikTok has made with regard to content moderation, ambiguity in its new guidelines is not ideal.
TikTok also makes specific note in its announcement that:
"Our global guidelines are the basis of the moderation policies TikTok's regional and country teams localize and implement in accordance with local laws and norms."
So, these are the regulations - unless local regulations differ, in which case they could change. That seems slightly confusing.