Introducing New Safety Systems on TikTok

TikTok is a global community that thrives on creativity and is committed to providing a safe and welcoming environment where people can express themselves and be entertained. To help maintain that environment, we build tools and technology that empower creators and address violations of our Community Guidelines.

Over the past year, we have been testing and refining new systems in markets around the world to identify and remove content that violates our guidelines and to notify users of those violations. Today, we are bringing these systems to the US and Canada as part of our ongoing commitment to the safety and integrity of our platform.

Our US-based Safety team is responsible for developing and enforcing the policies and safety strategies that protect users in the US and Canada. As on other user-generated content platforms, content uploaded to TikTok first passes through technology that identifies and flags potential policy violations for review by a member of our Safety team. If a violation is confirmed, the video is removed and the creator is notified of the removal and the reason for it; the creator can then appeal. If no violation is identified, the video is posted and can be viewed by others on TikTok.
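
As a rough illustration, here is a minimal Python sketch of that flag-then-review flow. The type and function names are ours, not TikTok's, and the stubs stand in for the real classifier and review tooling.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Outcome(Enum):
    POSTED = auto()
    REMOVED = auto()


@dataclass
class Upload:
    video_id: str
    creator_id: str


def classifier_flags(upload: Upload) -> list[str]:
    """Stand-in for the technology that flags potential policy violations."""
    return []  # e.g. ["adult_nudity"] when the system flags the video


def safety_team_confirms(upload: Upload, flags: list[str]) -> bool:
    """Stand-in for review of a flagged video by a Safety team member."""
    return False


def notify_creator(creator_id: str, reasons: list[str]) -> None:
    """Tell the creator what was removed and why; an in-app appeal is offered."""
    print(f"{creator_id}: removed for {reasons}; you may appeal in the app")


def moderate_upload(upload: Upload) -> Outcome:
    flags = classifier_flags(upload)
    if flags and safety_team_confirms(upload, flags):
        notify_creator(upload.creator_id, flags)
        return Outcome.REMOVED
    return Outcome.POSTED  # no flag, or the flag was not confirmed: the video posts
```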

In the coming weeks, we will begin using technology to automatically remove certain types of violative content at upload, in addition to removals confirmed by our Safety team. Automation will be reserved for content categories where our technology is most accurate, starting with violations of our policies on minor safety, adult nudity and sexual activities, violent and graphic content, and illegal activities and regulated goods. No technology can moderate content with complete accuracy, especially where context and nuance matter, so we will keep improving the precision of our systems to minimize incorrect removals. Creators can still appeal a video's removal directly in our app, and can still report potential violations for review.
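
One way to picture the automation is as a routing rule: auto-remove only when the flagged category sits in a high-accuracy allowlist and the model's confidence clears a threshold, and send everything else to human review. The category names below come from this post; the threshold value and function name are illustrative assumptions, not published figures.

```python
# Categories where automated detection is accurate enough to act on upload.
AUTO_REMOVE_CATEGORIES = {
    "minor_safety",
    "adult_nudity_and_sexual_activities",
    "violent_and_graphic_content",
    "illegal_activities_and_regulated_goods",
}
CONFIDENCE_THRESHOLD = 0.99  # illustrative value only


def route_flag(category: str, confidence: float) -> str:
    """Auto-remove only high-confidence flags in high-accuracy categories."""
    if category in AUTO_REMOVE_CATEGORIES and confidence >= CONFIDENCE_THRESHOLD:
        return "auto_remove"   # removed on upload; the creator can appeal in-app
    return "human_review"      # context and nuance still require a person


print(route_flag("minor_safety", 0.997))            # auto_remove
print(route_flag("bullying_and_harassment", 0.95))  # human_review
```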

This update aims not only to improve the overall experience on TikTok but also to support the well-being of our Safety team by reducing their exposure to distressing videos and letting them focus on areas that require contextual understanding, such as bullying and harassment, misinformation, and hateful behavior. Our Safety team will continue to review reports from the community, content flagged by technology, and appeals, and to remove violating content. It is important to note that mass reporting of content or accounts does not automatically trigger removal or increase the likelihood that our Safety team will remove something.

As noted in our Transparency Reports, this technology was first deployed to help meet the increased need for safety support during the COVID-19 pandemic. Since then, we have found that the false positive rate for automated removals is 5%, and requests to appeal a video's removal have remained consistent. We are committed to improving that accuracy over time.

Beyond these technological changes, we have also updated how we notify users of Community Guidelines violations, to raise awareness of our policies and reduce repeat violations. The new system tracks the number and severity of violations an account accumulates and notifies the user accordingly. Users will be told the consequences of a violation in the Account Updates section of their Inbox, where they can also view a record of their violation history.

Repeat violations bring escalating penalties and notifications throughout the app. The process works as follows (a sketch of the escalation appears after this list):

  • First violation: a warning is sent through the app, unless the violation falls under a zero-tolerance policy, in which case the account is automatically banned.

  • After the first violation: The account's ability to upload videos, comment, or edit the profile may be suspended for 24 or 48 hours, depending on the severity of the violation and previous violations. Alternatively, the account may be restricted to a view-only experience for 72 hours or up to one week, meaning the account cannot post or engage with content. In cases of repeated violations, users will be notified if their account is at risk of being permanently removed. If the behavior persists, the account will be permanently removed.

Certain violations, such as posting child sexual abuse material, will result in an immediate account removal. We may also block a device to prevent the creation of future accounts.
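
To make the escalation concrete, here is a minimal Python sketch of a strike ledger under the rules above. The penalty names mirror the list; the numeric thresholds are illustrative assumptions, since this post does not publish exact strike counts.

```python
from dataclasses import dataclass


@dataclass
class Account:
    strikes: int = 0


ZERO_TOLERANCE = {"child_sexual_abuse_material"}
AT_RISK_AFTER = 3   # illustrative thresholds; not TikTok's published numbers
BANNED_AFTER = 4


def penalize(account: Account, category: str, severe: bool) -> str:
    """Return the penalty for a newly confirmed violation, per the escalation above."""
    if category in ZERO_TOLERANCE:
        return "permanent_ban"          # immediate removal; the device may be blocked
    account.strikes += 1
    if account.strikes == 1:
        return "in_app_warning"         # first violation
    if account.strikes >= BANNED_AFTER:
        return "permanent_ban"          # behavior persisted after the at-risk notice
    if account.strikes >= AT_RISK_AFTER:
        return "at_risk_notice"         # account warned it may be removed
    if severe:
        return "view_only_72h"          # view-only, 72 hours up to one week
    return "feature_suspension_24h"     # or 48h, depending on severity and history
```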

While we strive for consistency, neither technology nor human moderators will get every decision right. That is why creators can appeal the removal of their content or account directly in our app. If a removal is found to be incorrect, the content or account is reinstated, the penalty is erased, and the violation does not count against the account going forward. Violations also expire from a user's record over time.
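
A minimal sketch of how a granted appeal and record expiry might interact with such a strike record, assuming a fixed expiry window; the post says only that violations expire "over time", so the 90-day figure is purely illustrative.

```python
from datetime import datetime, timedelta

EXPIRY_WINDOW = timedelta(days=90)  # illustrative window, not a published figure


def active_strikes(strike_dates: list[datetime], now: datetime) -> list[datetime]:
    """Strikes age out of the record once the expiry window has passed."""
    return [d for d in strike_dates if now - d < EXPIRY_WINDOW]


def grant_appeal(strike_dates: list[datetime], overturned: datetime) -> list[datetime]:
    """An incorrect removal is reinstated and its strike erased entirely."""
    return [d for d in strike_dates if d != overturned]
```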

These systems were developed with input from our US Content Advisory Council. As we tested them in the US and Canada over the past few weeks, more than 60% of users who received a first warning for violating our guidelines did not have a second violation. We believe that transparent, accessible policies lead to fewer violations, letting more people create and be entertained on TikTok.

We value the time and effort users put into creating content on TikTok, and it is our priority to ensure that our content moderation systems are accurate and consistent. We encourage our community to share their experiences with us so that we can continue to make improvements and maintain a safe and inclusive platform for our global community.