A lawsuit filed this week on behalf of the state of Indiana alleges that the video-sharing social media app TikTok, and its China-based parent company ByteDance, engage in deceptive practices to addict children and teens to the platform. The opening line of the complaint alleges that "TikTok Inc. is a Chinese Trojan Horse unleashed on unsuspecting American consumers who have been misled by the company's false representations about the content on its platform."
But despite the seriousness of the accusations, there is little evidence to back up the state's claims.
The lawsuit charges that the company lied in order to secure a more favorable age rating in Apple's App Store. TikTok currently has a 12+ rating in the App Store, indicating that it may be unsuitable for children under 12 years old but OK for anyone else. The suit says the app should have a more restrictive 17+ rating.
To justify that claim, the suit specifically invokes TikTok's recommendation algorithm, which curates the videos shown to each user. The lawsuit alleges that the algorithm "promotes a variety of inappropriate content to 13-17-year-old users throughout the United States" and that it "serves up abundant content depicting alcohol, tobacco, and drugs; sexual content, nudity, and suggestive themes; and intense profanity. TikTok promotes this content regardless of a user's age, which means that it is available to users registered with ages as young as 13."
To demonstrate the real-world impact that exposure to inappropriate content can have, the suit cites a case in which an Indiana school superintendent blamed a rash of school vandalism and petty theft on the "devious licks" TikTok trend. "Obviously, our kids are influenced by social media and TikTok," Park Grinder, the Southwest Allen County Schools superintendent, told a Fort Wayne TV station.
But as Reason's Liz Wolfe pointed out last year, TikTok took down the "devious licks" hashtag, and there is no indication of how many of the videos depicted actual theft or vandalism versus how many were faked for clout. Similarly, the lawsuit lists keywords and euphemisms that can be used to search for sexual content on the platform, workarounds that are only necessary because the platform does not allow searches for explicit terms.
Singling out the algorithm is similarly off-base. The lawsuit claims that "many children are exposed to non-stop offerings of inappropriate content that TikTok's algorithm force-feeds to them." Recommending explicit videos to a user watching kid-friendly content would certainly be an odd way to keep customers. If anything, it would behoove a platform like TikTok to keep explicit content away from anybody except those who intentionally seek it out.
The suit misunderstands how recommendation algorithms work: While platforms do engineer their algorithms to keep users engaged, they achieve this by surfacing content similar to what a user already interacts with. The algorithm would prioritize inappropriate content only for a user who had already watched videos like that. If anything, algorithms cut through the noise by deprioritizing content unrelated to a user's interests, along with content the user has actively avoided.
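The similarity-driven recommendation logic described above can be sketched in a few lines of Python. This is a deliberately simplified toy model, not TikTok's actual system: the video names, tag weights, and scoring scheme below are all hypothetical, and real recommenders learn their features rather than hand-coding them. The point is only to show why a profile built from kid-friendly viewing ranks similar content first and unrelated content last.

```python
from collections import Counter
import math

# Hypothetical catalog: each video is a sparse vector of tag weights.
# Real systems learn these features; these values are made up for illustration.
VIDEOS = {
    "cat_compilation": {"pets": 1.0, "humor": 0.8},
    "lego_build":      {"toys": 1.0, "crafts": 0.7},
    "bar_crawl_vlog":  {"alcohol": 1.0, "nightlife": 0.9},
}

def cosine(a, b):
    """Cosine similarity between two sparse tag vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def user_profile(watch_history):
    """Average the tag vectors of everything the user has watched."""
    profile = Counter()
    for vid in watch_history:
        for tag, weight in VIDEOS[vid].items():
            profile[tag] += weight / len(watch_history)
    return profile

def rank(watch_history):
    """Rank all candidate videos by similarity to the user's profile."""
    profile = user_profile(watch_history)
    return sorted(VIDEOS, key=lambda v: cosine(profile, VIDEOS[v]), reverse=True)

# A user who only watches kid-friendly clips shares no tags with the
# nightlife video, so it scores 0 and lands at the bottom of the ranking.
ranking = rank(["cat_compilation", "lego_build"])
```

Under this kind of scoring, "inappropriate" content surfaces only when a user's history already overlaps with it, which is the article's point: pushing unrelated explicit videos at a kid-friendly profile would score poorly by the recommender's own objective.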
Indiana's claims of fraud seem aggressive, but it's likely that anything more moderate would have failed: In October, a judge threw out a lawsuit against TikTok relating to content on the platform. The judge cited Section 230, the law that largely protects platforms from liability for user-generated content.
Ultimately that's exactly what is at issue here: Some users generate content, other users view that content, and algorithms try to keep each of them engaged. The idea of algorithms intentionally funneling inappropriate content to unsuspecting users not only defies logic but is antithetical to how a social media platform keeps users.