
Rabbit Holes and Echo Chambers. Part 4: Experimenting with TikTok

In the first experiment, we put YouTube’s rabbit holes under scrutiny and despaired at how easy it is to fall into anti-vaxxer disinformation. We also provided an overview of how social media work in general, which we recommend checking out. In the second and third experiments, we tested Facebook’s and Instagram’s propensity to let disinformation and conspiracy theories about space run rampant. In this final part, our test subject will be TikTok.


A different threat from TikTok


Some preliminary notes

TikTok is a much newer social media, and has had a history that is best described as “troubled”, to say the least.


It has also been accused of being a major channel for disinformation online – and given its booming and young user base, this arguably makes it a much more serious threat than the likes of Facebook.

It should be noted, however, that it is especially difficult to pin down exactly how TikTok works: Because it is a comparatively younger social media, its algorithm is still evolving; it is not difficult to find online speculation that the algorithm changed very recently or is likely to change again. The algorithm is also known to differ between countries.


With this in mind, it is important to note that our experiment took place in Summer 2022, and on the basis of the algorithm the social media uses for European countries.


The results were somewhat surprising.


A controlled content-driven engagement

Just as on YouTube and Instagram, engagement on TikTok is mainly content-driven rather than community-driven. Content creators post short videos, and users watch them, scroll past, like them, or comment.


On a positive note, it should be stated right away that TikTok started blocking disinformation hashtags at some point in 2022: Hashtags such as “QAnon”, “9/11insidejob” or “Moonlandinghoax” could not even be found – at least not at the time of the experiment.


Engagement is therefore driven by content, and there does not appear to be a deliberate attempt to peddle deceitful content.


So, all good?


A piloted experience

Not really.


In our experiment, we tried to reproduce the same “rabbit hole”, “echo chamber”, and “engagement through novelty” effects we had identified in the case of YouTube, Facebook, and Instagram, respectively.


Instead, we noticed that it is in fact extremely difficult to veer away from what TikTok has decided one must see. Of all the experiments, the TikTok one was by far the longest, taking several hours. We spent a long time browsing between generally rather silly videos (and some painfully stupid ones, too), trying to see whether watching the same kind of videos produced a “rabbit hole” effect. The same result kept repeating: The “For You” section that recommends new content to users was always filled with what appeared to be a pre-determined mix of content: funny and crude videos, life hacks, beauty videos, pet videos, product promotions and celebrity content.


Despite long browsing sessions spent watching and interacting with the same kind of videos, TikTok still decided that there were other categories of videos we needed to see. Every time we “fed” our preferences to the algorithm, they were barely taken into account.


Even if we spent time watching pro-Kremlin propaganda, for example, TikTok then gave us suggestions that were surprisingly mixed: pro-Russian, sure, but also pro-Ukrainian videos.


The only way to have our preferences taken into account was to let the same video play over and over for 20 or 30 minutes. This is not really how people use TikTok, though: There are only so many times one can watch a video of Putin walking through a door, after all.


One of the weirdest things we noticed during our browsing was that TikTok really wants you to watch gross videos. We stumbled across a single disgusting video of parasite removal, and after that – and very much against our desires and tastes – we were regularly suggested videos featuring pimple popping and other revolting things.


TikTok does not really work through “echo chambers” like other social media: Instead, it seems to give you more of a “curated experience”.


Some may even call it a piloted one.


What this implies for disinformation

On the one hand, this is good: It is better to be peddled pimple-popping videos than Kremlin disinformation.


On the other hand, the fact that the TikTok experience appears to be so heavily piloted (at least in Europe, and at least with the algorithm in place at that time) can make for a rather sinister realization.


Users may start using TikTok assuming that what they are watching is suggested to them based on their preferences and on popularity. Yet this assumption would be wrong. When using TikTok, users should keep in mind that they may have even less control over what they see than on other social media: What they see is decided almost entirely by someone else.


If the browsing experience on TikTok really is “piloted”, then all it takes is a change in the algorithm decided by the company, and disinformation may become part of users’ TikTok “menu” without them even realizing it.


Conclusions

None of the social media we experimented with gave us a sense of resilience against disinformation: Deceitful messages are either openly promoted, left unchecked, or always lurking “around the corner”. Some social media are downright terrifying in how shamelessly they allow users to fall down rabbit holes and get trapped in echo chambers.


With all this in mind, we feel it is important to urge caution whenever you are on social media: Keeping users engaged is the goal; this is what generates profit for the companies, after all. They may do it by feeding the same stuff over and over again, or by steering users and shaping their preferences. In either case, users should always remain sceptical about what they see.


As for social media companies, a lot more is definitely needed to keep their environments safe from disinformation. Assuming they even care about this.

