TikTok Releases Q2 Community Guidelines Enforcement Report

TikTok has released its Q2 Community Guidelines Enforcement Report, which details the volume and nature of content and accounts removed from TikTok to ensure the safety of its community and the integrity of its platform.

In the report, TikTok reveals that it removed a total of 81,518,334 videos globally between April and June for violating its Community Guidelines or Terms of Service. This accounts for less than 1% of all videos uploaded to TikTok during that period. The majority of these violative videos (93.0%) were identified and removed within 24 hours of being posted, and 94.1% were removed before a user reported them. Notably, 87.5% of the removed content had zero views, an improvement over the previous report.

TikTok has made significant progress in proactively detecting and removing hateful behavior, bullying, and harassment. The company reports improvements in removing harassment and bullying videos, as well as hateful behavior videos, before any user reports were made. These gains result from ongoing enhancements to systems that proactively flag hate symbols, words, and other abusive signals for further review by safety teams.

However, TikTok acknowledges that addressing harassment and hate speech is a complex task that requires careful consideration of context. The company provides regular training and guidance to its teams on distinguishing reappropriation from slurs, and satire from bullying. It has also implemented unconscious bias training for moderators and hired policy experts in civil rights, equity, and inclusion. TikTok encourages users to report any accounts or content that may violate its Community Guidelines.

In addition to removing content, TikTok empowers users to customize their experience through various tools and resources, including ways to filter comments, delete or report multiple comments at once, and block accounts in bulk. It has also introduced prompts that encourage users to consider the impact of their words before posting a potentially unkind or violative comment. These prompts have already had a positive effect, with nearly 4 in 10 people choosing to withdraw and edit their comments. TikTok will continue to develop and test new interventions to prevent potential abuse.

To further prioritize safety, TikTok is introducing improved mute settings for comments and questions during livestreams. Hosts or their trusted helpers can temporarily mute an unkind viewer for a few seconds or minutes, or for the entire duration of the livestream. When an account is muted, all of that person's comments are also removed. Hosts can already turn off comments entirely or limit potentially harmful comments using a keyword filter. These new controls aim to further empower hosts and audiences to keep livestreams safe and entertaining.

TikTok also reaffirms its commitment to combating antisemitism. The company has strengthened its policies and enforcement actions against antisemitic content on TikTok, and aims to expand its collaboration with NGOs and civil society groups to provide educational resources on the Holocaust and modern-day antisemitism.

For more information on the steps TikTok is taking to protect the safety of its community and the integrity of its platform, please read the full Q2 report.