Art by Carson Vandermade
TikTok accounts are not as harmless as they seem; they may be negatively impacting your mental health. A company insider, concerned about the algorithm’s push toward triggering content that can lead users to engage in self-harming behaviors, leaked information about how the TikTok algorithm works.
The documents the TikTok employee sent to The New York Times reveal a key focus of the algorithm — how much time a viewer spends looking at a video. Does the user repeat the video, pause or skip through it?
The algorithm monitors a TikTok user’s behavior to see which types of videos elicit emotional reactions, then uses that information to keep the user on the platform longer.
Similarly, a recent Wall Street Journal experiment — in which the newspaper created 100 new TikTok bot accounts — found those accounts quickly fell down rabbit holes of increasingly niche, emotionally extreme content. Wading into the quagmire of the TikTok algorithm can expose a user to politically and psychologically unhinged content, such as self-harm videos and people discussing far-right or far-left conspiracy theories as if they were true.
This content is dangerous because it exposes users to emotionally triggering material and misinformation they are better off not seeing.
This TikTok “For You” material was less likely to be moderated, according to The Wall Street Journal. It was also much more emotionally intense than the videos the bots initially selected on their own.
TikTok admitted only 1% of the content it removes is taken down for disinformation, dangerous individuals and organizations, hate speech or inauthentic content. The other 99% was removed for other issues, such as nudity and other platform priorities.
The company directed moderators to filter out videos featuring “ugly, poor or disabled users.” The Guardian reports TikTok instructed moderators to remove videos for “abnormal body shape,” “ugly facial looks or facial deformities” or shooting environments that look “shabby and dilapidated.”
The tendency to keep content that looks physically attractive but features conspiracy theories and other disinformation, combined with the algorithm’s push to show users emotionally intense videos from beautiful-looking people, creates a perfect storm of emotionally toxic but visually appealing content.
The “For You” section will suggest some videos in line with those the viewer previously watched, but it will also add in types of content a user might not seek out independently. TikTok itself claims it selects those videos based on factors such as users’ geographic locations, preferred languages and previous comments, likes and video interests.
However, Wall Street Journal researchers discovered TikTok appears to present extreme content to increase time spent on videos.
TikTok previously claimed it was reviewing many of its videos to ensure they followed the platform’s rules, but as the app’s popularity grew, the number of human moderators did not keep pace with the proliferation of content. TikTok also recently announced that computer software, rather than human moderators, will monitor more of its United States content.
Computer programs cannot adequately moderate the complexities of what gets posted. Some of the content is so disgusting that even human moderators are quitting because they do not want further exposure to it.
Former TikTok moderator Candie Frazier recently announced she was suing for damages because viewing TikTok content allegedly gave her PTSD.
Frazier claimed she reviewed some TikTok videos that contained “extreme and graphic violence” and that content gave her “significant psychological trauma.”
TikTok videos receive less moderation in some languages and countries, and filtering on the platform is becoming increasingly automated. Some self-harm and extremist content slips through computer filters, and with more time spent on TikTok, a user is more likely to encounter unmoderated content.
This unfiltered material can range from videos containing nudity to ones that advocate self-harm behaviors such as cutting, suicide, bulimia and other disordered eating. Research shows people who spend more time on social media are at greater risk of negative mental health impacts such as negative body image, eating disorders and depression.
Research linking increased time on social media sites such as Facebook and Instagram to body dissatisfaction and eating disorders in adolescents and young women has made people question how safe young people are on this poorly monitored platform.
In response to these concerns, TikTok launched a mental health support guide and eating disorder resources on its app to address the self-harm, disordered eating and body dysmorphia that might be triggered by TikTok content.
A substantial proportion of TikTokers are young and vulnerable — 33% of users in the United States are 19 years of age or younger. Research also shows TikTok use is associated with neuroticism.
This research raises the question: Do people arrive at TikTok neurotic, or is their neuroticism a result of exposure to triggering content on the platform? The Wall Street Journal’s bot accounts did not start off with interests in self-harm or political conspiracy theories, but within a few hours of use they received videos with those themes.
If users do decide to remain on TikTok, they should use the “For You” page option to put a hold on disturbing videos by selecting “Not Interested.” This option gives users a way to curate their feed and have more say in what content they continue to see.
It is worth critically examining what is landing in your “For You” recommendations and recognizing that some of that content was never intended to be mentally healthy for you. Given the dangers of this stream of extreme content, users should consider getting off the app entirely.
_________________
Follow the Graphic on Twitter: @PeppGraphic
Email Joshua Evans: josh.evans@pepperdine.edu