In the first experiment, we looked at how YouTube allows the creation of rabbit holes leading to anti-vaxxer disinformation. We also provided an overview of how social media work in general, which we recommend checking out. In the second part, we tested Facebook’s resilience – or lack thereof – to disinformation and conspiracy theories about space. In this third experiment, we are going to look into Facebook’s slightly cooler cousin: Instagram.
Instagram’s antisemitic stories
How Instagram’s engagement works
Instagram may not look like the most obvious place to find disinformation – unless you count influencers failing to disclose that they are paid to promote products.
And yet, in many ways, our experiment with the reels- and pictures-based platform gave us even more discomforting results.
Instagram is, of course, part of the same corporation as Facebook, but it works in a very different way, arguably one more similar to YouTube: engagement is not horizontal; content creators throw material out there for their audience; communities form only in the comment sections, and these remain secondary to the content itself. Content creators remain in control, especially when they post stories that people react to: while the creator gets their message out to the whole audience, your reaction is a private exchange between the two of you.
Thus, just as with YouTube, it is content that drives engagement, mediated heavily by users’ vertical interactions with that content (likes, saved posts, story views and reactions to them).
The dark side of an apparently “light” medium
For this experiment, we created two separate Instagram accounts, and we tried to find an equal amount of legitimate and conspiracy material about space-related topics, as we had done for the Facebook experiment.
With one Instagram account, we followed the two groups of pages while keeping them mixed: we followed one legitimate page, then one disinformation page, then another legitimate page, another disinformation page, and so on.
With the other Instagram account, we instead followed the two groups of pages one after the other: first we followed all the legitimate pages, and only then did we start following the disinformation ones.
The results were, in fact, rather shocking.
While the YouTube experiment took around 60 minutes before our browsing experience became dominated by disinformation and sensationalism, and the Facebook experiment took 30 minutes, the Instagram experiment gave us serious reasons for concern after just eight minutes.
The fast path to echo chambers
Keep in mind: in the first part of the experiment, we had tried to keep our “information menu” balanced, liking both legitimate and illegitimate pages about space, alternating between the two groups.
Yet, after just eight minutes of liking and interacting with content creators, when we returned to our general feed, we were fed six posts from legitimate pages versus ten from conspiracy-related pages – eight of them from the same page, called planemaa, which quickly became overrepresented in our feed.
A tendency to skew recommendations towards conspiracy theories was already apparent.
The danger of Instagram stories
And not just any conspiracy theory.
The page to which Instagram’s algorithm had inexplicably started giving such disproportionate visibility quickly turned out to be a receptacle of antisemitic and homophobic content in its stories.
Stories are harder to check both for the social media platforms themselves (assuming they even want to check them) and for fact-checkers, as they are temporary and fast-moving; thus, it is not surprising that disinformation pages sometimes reserve some of their worst content for stories. In the case of planemaa, we were treated to explanations of how science and Judaism both advance occultism, or how Jews conspire to invent discrediting labels such as “Nazi” to attack their enemies, and we also got to “enjoy” antisemitic and homophobic caricatures straight out of a 1930s pamphlet.
Keep in mind that the preference for conspiracy-theory pages does not come from those pages having more followers or being more popular. The opposite, in fact, is true!
But things got even worse when we ran the second part of the experiment.
Engagement through novelty
When we ran the experiment by following all the legitimate pages in one go and only then following the disinformation pages, the disinformation rabbit hole opened instantly.
After liking legitimate pages in the second part of the experiment with a new account, we were initially given recommendations for similar content. As soon as we liked a batch of disinformation pages, however, virtually all the top recommendations on our main page were taken over by deceitful material. The first ten posts all belonged to disinformation pages; the eleventh was from a legitimate page (NASA), followed again by two pieces of disinformation content.
Every time some legitimate material snuck into our feed as we scrolled further, it was immediately counterbalanced by more disinformation.
The fact that we had liked a good balance of legitimate and illegitimate material mattered very little, as did the fact that the legitimate pages had more followers and published higher-quality content. Instead, it was the most recent interaction that drove engagement.
Engagement through novelty is the mechanism that Instagram’s algorithm appears to follow: if the platform sees that you have expressed interest in and engaged with a certain topic, it will feed you more and more of that topic to keep you from disengaging from the platform.
If the latest thing you happened to stumble across was disinformation, your Instagram “menu” will be quickly taken over.
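To make the pattern we observed a little more concrete, here is a minimal, purely illustrative sketch of a recency-weighted feed ranker. It is not Instagram’s actual algorithm – the scoring formula, the assumed 10-minute half-life and the Interaction/Post structures are all our own assumptions – but it reproduces the behaviour we saw: one recent burst of interactions with a single kind of page is enough to push that content to the top of the feed, no matter how balanced the overall follow list is.

```python
import time
from dataclasses import dataclass

# Illustrative model only: these structures and weights are assumptions,
# not Instagram's real data model or ranking formula.

@dataclass
class Interaction:
    topic: str        # e.g. "legitimate_space" or "space_conspiracy"
    timestamp: float  # when the like/follow happened (seconds)

@dataclass
class Post:
    page: str
    topic: str

def topic_affinity(interactions, now, half_life=600.0):
    """Recency-weighted interest per topic: recent likes count far more
    than older ones (exponential decay, assumed 10-minute half-life)."""
    scores = {}
    for i in interactions:
        age = now - i.timestamp
        weight = 0.5 ** (age / half_life)
        scores[i.topic] = scores.get(i.topic, 0.0) + weight
    return scores

def rank_feed(posts, interactions, now=None):
    """Order candidate posts by the user's recency-weighted topic affinity."""
    now = time.time() if now is None else now
    affinity = topic_affinity(interactions, now)
    return sorted(posts, key=lambda p: affinity.get(p.topic, 0.0), reverse=True)

if __name__ == "__main__":
    now = 1_000_000.0
    # A balanced follow history, but the *last* burst of likes was conspiracy content.
    history = (
        [Interaction("legitimate_space", now - 3000 + i * 60) for i in range(10)]
        + [Interaction("space_conspiracy", now - 300 + i * 30) for i in range(10)]
    )
    candidates = [Post("NASA", "legitimate_space"),
                  Post("planemaa", "space_conspiracy")] * 5
    for post in rank_feed(candidates, history, now):
        print(post.page, post.topic)
    # Every conspiracy post ranks above every legitimate one, mirroring
    # the takeover we observed right after the last batch of likes.
```

Under these assumptions, the balanced history barely matters: the decay term makes the ten most recent likes outweigh everything that came before, which is exactly the “menu takeover” effect described above.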
In the final experiment, we will have a look at TikTok. The results there are still creepy, but in a different way.