Isabel Encinares

TikTok is Working on In-App User Safety

TikTok has been looking to further improve its automatic violation-detection tools as it aims to make TikTok a safer place and ensure that all uploaded content follows its rules and regulations. The new process will remove detected content in its entirety immediately upon upload, to keep harmful content from ever reaching the eyes of the platform's millions of users.



As TikTok has explained, the upload process currently includes a built-in system that scans video content to identify and catch potential policy violations. Once the system flags content, it is forwarded to a review team, which then notifies users if their content has violated TikTok’s standard policies. While the current process may already seem effective, TikTok’s continuously growing, massive scale leaves room for error.


As of now, TikTok has begun working to improve the system to better ensure that harmful and violative content never reaches the eyes of the platform's many users.



According to TikTok:


"Over the next few weeks, we'll begin using technology to automatically remove some types of violative content identified upon upload, in addition to removals confirmed by our Safety team. Automation will be reserved for content categories where our technology has the highest degree of accuracy, starting with violations of our policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods."


Essentially, TikTok believes that the best way to limit harmful in-app exposure is to remove violating clips immediately, blocking them from ever being published. The new system the company is working on will enforce this approach. While there may be false positives, TikTok notes that its detection system has proven to be highly accurate, and the company would rather accept a few false positives as it develops the system than let potential violations and harmful content circulate on the platform.
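To make that flow concrete, here is a minimal sketch of what an upload-time triage step might look like. The category list comes from TikTok's statement above, but the function names, confidence thresholds, and routing logic are purely illustrative assumptions, not TikTok's actual implementation.

```python
# Hypothetical sketch of an upload-time moderation triage step.
# Category names come from TikTok's announcement; the thresholds,
# function names, and routing logic are illustrative assumptions.

AUTO_REMOVE_CATEGORIES = {
    "minor_safety",
    "adult_nudity_and_sexual_activities",
    "violent_and_graphic_content",
    "illegal_activities_and_regulated_goods",
}

CONFIDENCE_THRESHOLD = 0.98  # assumed: auto-remove only at very high confidence


def triage_upload(video, classifier):
    """Decide whether to auto-remove a video, queue it for human
    review, or publish it. `classifier` is assumed to return a
    (category, confidence) pair for the most likely violation."""
    category, confidence = classifier(video)

    if category in AUTO_REMOVE_CATEGORIES and confidence >= CONFIDENCE_THRESHOLD:
        return "auto_remove"   # removed on upload; the user may appeal
    if confidence >= 0.5:      # assumed threshold for human review
        return "human_review"  # forwarded to the Safety team
    return "publish"
```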


"We've found that the false positive rate for automated removals is 5% and requests to appeal a video's removal have remained consistent. We hope to continue improving our accuracy over time."


It is worth noting that the 5% figure applies to automated removals, not to all uploads; even so, at TikTok's scale that can still add up to a large raw number of wrongly removed videos. Still, the risk of harmful exposure is significant, and TikTok's tradeoff makes sense. Fully automating detection at a 5% false positive rate also seems reasonable, as that is already a fairly strong accuracy figure.
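As a quick back-of-the-envelope illustration (the 5% rate is from TikTok's statement, but the daily removal volume below is a made-up placeholder, since TikTok has not published those figures), the raw number of false positives scales directly with removal volume:

```python
# Back-of-the-envelope false-positive arithmetic.
# The 5% rate is from TikTok's statement; the daily removal
# volume is a hypothetical placeholder, not a published figure.

false_positive_rate = 0.05
daily_automated_removals = 1_000_000  # hypothetical

wrongly_removed_per_day = false_positive_rate * daily_automated_removals
print(f"~{wrongly_removed_per_day:,.0f} videos wrongly removed per day")
# -> ~50,000 videos wrongly removed per day (under these assumptions)
```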


Moreover, TikTok believes there is another important benefit:


"In addition to improving the overall experience on TikTok, we hope this update also supports resiliency within our Safety team by reducing the volume of distressing videos moderators view and enabling them to spend more time in highly contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior."


According to a number of studies and investigations, content moderation can take a heavy psychological toll on moderators. As such, reducing the stress moderators are put under should be a clear priority for TikTok.


In addition to the new system, TikTok’s account violation and reporting screens will also be getting a new look. The redesign is being introduced to improve transparency and discourage users from pushing the limits of the platform’s policies.


TikTok’s updated system will display the various violations a user has committed and will also prompt users with warnings and reminders.

The penalties for repeated violations will gradually intensify, starting with an initial warning and escalating to possible full bans. More serious issues, however, such as content sexualizing children or depicting abuse, will immediately result in the removal of the account, and possibly in the device being permanently blocked.
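One simple way to picture such an escalation ladder is sketched below. The tiers and strike thresholds are assumptions for demonstration only; TikTok has not published its exact enforcement schedule.

```python
# Illustrative strike-escalation ladder. The tiers and thresholds
# are assumptions for demonstration; TikTok has not published its
# exact enforcement schedule.

SEVERE_CATEGORIES = {"child_sexual_exploitation", "abuse"}

ESCALATION_LADDER = [
    (1, "warning"),
    (3, "temporary_feature_restriction"),
    (5, "temporary_ban"),
    (7, "permanent_account_ban"),
]


def penalty_for(strike_count, category):
    """Return the penalty for a user's latest violation."""
    if category in SEVERE_CATEGORIES:
        # Severe violations skip the ladder entirely (possibly
        # including a device-level block, per the article).
        return "immediate_account_removal"
    penalty = "warning"
    for threshold, tier in ESCALATION_LADDER:
        if strike_count >= threshold:
            penalty = tier
    return penalty
```

Under these assumed tiers, a fifth strike would trigger a temporary ban, while any severe-category violation bypasses the ladder and removes the account immediately.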


The new rules and measures are extremely important: according to data published by The New York Times last year, around a third of TikTok’s user base may be 14 years old or under. This means that, within the app, there is a very high risk of exposure, whether as creators or as viewers.


Because of harmful user-generated content, TikTok has already faced numerous investigations and temporary bans, as young children attempt dangerous challenges seen in the app. In one such case, TikTok was temporarily banned after a ten-year-old girl died attempting to replicate a viral trend.


Cases such as the one mentioned above highlight the increasing need for TikTok and other social media platforms to protect their users from dangerous exposure and harmful content. Hopefully, the new tools help combat violations and prevent harmful content from ever being published.


TikTok has noted that 60% of people who receive a first warning do not receive a second, which is another reason to believe in the new system.

All in all, while a few false positives may upset some users, the risks far outweigh these possible minor inconveniences.


Want to learn more about TikTok’s safety updates? Check it out HERE.
