TikTok spokesman Alex Haurek said the change was made last month, adding that the company is “continually evolving the TikTok platform and displaying hashtag metrics by number of posts brings us in line with industry standards.” He noted that academic researchers have other ways to study TikTok content.
Company officials have said the Israel-related viewership numbers were used in misleading ways. But they also used the data themselves to argue that the company’s recommendation algorithm doesn’t “take sides” and that other platforms show a similar viewership gap.
The Center for Countering Digital Hate, an advocacy group that first noted the change Wednesday, called it “a step back for transparency” that “makes it harder to understand the scale of potential harms.”
The group had used the feature over the years to study the spread of videos promoting antisemitism, eating disorders and other harmful content. In a report last year, the group said TikTok videos promoting steroids and similar drugs had been seen more than 589 million times.
The change is likely to fuel further criticism of the company's transparency about how one of the world's most popular apps operates. At a Senate Judiciary Committee hearing last week, TikTok's chief executive, Shou Zi Chew, was grilled over whether its ownership by the Chinese tech giant ByteDance had affected the content it shared with global audiences. Chew has said repeatedly that the company is not influenced by the Chinese government.
Acknowledgment of the change comes one month after TikTok limited another tool, Creative Center, that researchers had used to examine video-viewership differences during the Gaza war. That tool no longer provides information on hashtags related to the war or other political issues.
The data had been used by critics to argue that TikTok had chosen to offer a lopsided view of the conflict to advance Chinese policy goals, which the company vigorously disputes. Haurek said the tool was built for advertisers and had been misused to “draw inaccurate conclusions,” adding that the change would “ensure it is used for its intended purpose.”
TikTok’s hashtag data is an imperfect measure for assessing user behavior: Many videos aren’t given a hashtag, and some creators add them to videos merely to criticize what they depict. The total view count for a hashtag, the information TikTok now hides, is similarly imprecise, because it offers no indication of whether a video has been heavily promoted by TikTok’s algorithm or is popular for reasons of its own.
But those data points offered some of the only clues that social media researchers can use to evaluate TikTok’s algorithm, which promotes content in an opaque way to a user base that now includes 170 million accounts across the United States. Researchers have for years pushed TikTok and other platforms to share more data on what kinds of content are promoted or suppressed.
TikTok has worked to address those concerns by opening a “transparency center” in Los Angeles where journalists and policymakers have been given tours to see how the platform works. Another center is scheduled to open soon in Washington.
The company also shares some public data related to video performance and search results via a system called Research API. Access to the system, however, is limited to U.S.-based academic researchers who must apply and be chosen by TikTok.
TikTok’s rivals have taken similar steps that have undercut independent research.
X, formerly Twitter, this year told academics they needed to delete the data they’d collected from a free, years-old partnership granting them access to a massive stream of tweets known as the “firehose.” In 2020, Facebook cut off a New York University research group that tracked political-ad targeting on the platform, saying it compromised people’s privacy. And in 2022, Meta, which owns Facebook, shut down a tool called CrowdTangle that journalists and researchers had used to show which posts were most popular — often with embarrassing, odd or politically skewed results — after executives said the tool was unrepresentative of normal use.
Joel Finkelstein, a researcher at the Network Contagion Research Institute at Rutgers University, said TikTok’s hashtag-view change suggested the company was similarly eager to defang researchers who wanted to uncover the platform’s problems.
Finkelstein used the hashtag data in a December report to argue that video topics deemed subversive by Chinese censors, such as the Hong Kong protests, were relatively underdiscussed on TikTok compared with other platforms. (Asked about that study last week, Chew cited a report by a libertarian think tank, the Cato Institute, that said its methodology was flawed.)
The hashtag-view change is “part and parcel of what appears to be a set strategy for eliminating transparency” inside the company, Finkelstein said. “The more problems there are, the tighter the curtain gets closed.”