
AI-powered Propaganda Operation “Overload”: Debunk.org’s Case (Part 2)

We have analysed the content of 339 emails we received as one of the organisations targeted in Operation “Overload.” During this operation, threat actors tied to Russia attempted to overwhelm independent organisations with disinformation content, diverting attention from other important events. The content analysis showed that over 80 per cent of the visual materials were disinformative, and over half were altered or AI-generated. Our latest report reveals what we found through MAAM’s AI forensic tools.


Debunk.org is one of the organisations targeted by the disinformation campaign Operation “Overload”. The operation, uncovered by the organisation Check First and its partners, targeted over 800 organisations and aimed to overwhelm independent organisations with fabricated, non-existent disinformative content in order to scatter their limited resources. Our organisation alone received 339 emails, all on dates coinciding with major geopolitical events and public holidays. Most of these emails were sent from Gmail accounts with duplicate usernames. They contained links to Telegram channels and focused on topics such as disinformation, political allegations, and social issues.


In this research, we focused on the content and origins of the visuals included in those emails. Our findings reveal that most of the content shared in Debunk’s case of Operation “Overload” was either fully or partially fabricated: over 80% of the media shared was classified as disinformative. Debunk.org partnered with the MeVer group of CERTH, which granted access to MAAM, a platform in development as part of the AI-CODE project that aims to help investigators analyse, categorise, and test diverse media formats. Specifically, it seeks to provide a multi-purpose media analysis tool capable of running AI detection analyses and forensics on different forms of media. After extracting images from the emails received by Debunk.org, it was possible to test their authenticity with the help of MAAM.
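As an illustration of the extraction step, the sketch below shows one possible way image attachments could be pulled out of locally stored emails before upload to an analysis platform such as MAAM. The directory layout and the assumption that the emails are stored as .eml files are ours for the example; this is not a description of Debunk.org’s actual pipeline.

```python
# Hedged sketch: extract image attachments from locally stored .eml files
# so they can be uploaded to an analysis platform such as MAAM.
# The "emails/" and "extracted_images/" folder names are assumptions.
from email import policy
from email.parser import BytesParser
from pathlib import Path

def extract_images(email_dir: str = "emails", out_dir: str = "extracted_images") -> int:
    """Walk a folder of .eml files and save every image attachment found."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    saved = 0
    for eml in Path(email_dir).glob("*.eml"):
        msg = BytesParser(policy=policy.default).parsebytes(eml.read_bytes())
        for part in msg.walk():
            if part.get_content_maintype() != "image":
                continue
            name = part.get_filename() or f"attachment_{saved}.bin"
            (out / f"{eml.stem}_{name}").write_bytes(part.get_payload(decode=True))
            saved += 1
    return saved

if __name__ == "__main__":
    print(f"Extracted {extract_images()} image attachments")
```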


The attached imagery can roughly be divided into three categories: 1. screenshots of videos taken out of context, 2. low-quality imagery overlaid with text or headlines, or 3. scans of newspaper articles, some of which were AI-generated or altered, while others showed no signs of editing based on MAAM's analysis. Regardless of origin, newspaper front pages and articles appeared frequently in the dataset.


The text overlaid on the images or video screenshots often carried semantically charged messages spreading long-known Kremlin disinformation narratives. These messages focused on discrediting Ukrainian refugees, Ukrainian fighters, American Democratic Party officials, and the French government. Images of graffiti artwork and visibly edited images of politicians (with a focus on JD Vance) also appeared frequently.


The total pool of 106 images was collected and uploaded to the MAAM MeVer platform, which analyses and labels uploaded images according to its methodology. After reviewing the automatic and requested labelling produced by these analyses, we developed a methodology for categorising the images into distinct sections. We arrived at three overarching categories: Manipulated, Overlaid Text, and Other.


The majority of the content shared in Debunk’s case of Operation “Overload” has been either fully or partially fabricated. Around 83% of all media shared has been deemed manipulated to disinform. Manipulated imagery has been at the forefront of the operation, accounting for more than half of the received and analysed content. Its largest subcategory, AI-generated images, has also been the predominant category within the entire dataset: with the help of MAAM MeVer tools, 34% of all content has been deemed fully or partially AI-generated.


The second-largest category has been Overlaid Text, which served as the main driver of semantically aggressive messaging. The category promoted several deceptive narratives aligned with the Kremlin’s disinformation. Specifically, content was oftentimes harshly critical of the Ukrainian, French, and U.S. authorities, of Ukrainian refugees and civilians, and of the Ukrainian army. A number of these posts also showed parallels to the Russian Doppelganger campaign, as the images were forged to appear as though they were part of reporting material shared by some of the world’s leading news agencies and newspapers. The forgeries imitated the format of visual reporting and stole logos, while no trace of the supposed stories could be found on the pages of the imitated outlets.


The Other category comprised images that had not been manipulated via editing or AI generation. Whilst the information relayed may still be misleading, images in this category did not portray strong malicious narratives.


MANIPULATED IMAGES


Manipulated images can be divided into three subcategories: AI-generated, Likely AI-generated, and Edited. The AI-generated category includes images that MAAM MeVer analysis conclusively identified as manipulated or fully generated by AI.

AI-Generated Images

MAAM’s toolset includes an AI Generation analysis section, which runs several algorithms to estimate the likelihood that an image was altered or created with generative AI. When the algorithms’ probability is judged sufficiently high, the image is marked with an “AI Generated” label. A large number of images from our data pool were labelled as AI-generated. Some examples are shown below: a mocking graffiti artwork of President Macron, a generated image of JD Vance as a DJ at a party, and an anti-Ukraine graffiti saying “STOP UKRAINE” on the side of a building.
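Purely as an illustration of how several detector scores might be reduced to a single label, the sketch below takes the highest per-algorithm probability and applies a cut-off. The thresholds and function name are assumptions made for explanation; this is not MAAM’s actual decision logic.

```python
# Illustrative only: reduce several AI-detection scores to one label.
# This is NOT MAAM's real decision logic; the thresholds are assumed.
def ai_generation_label(algorithm_scores: dict[str, float],
                        confident_threshold: float = 0.90) -> tuple[str, float]:
    """Return (label, highest score) from per-algorithm probabilities in 0..1."""
    top = max(algorithm_scores.values())
    if top >= confident_threshold:
        return "AI Generated", top
    if top >= 0.50:
        return "Uncertain", top
    return "Not Detected", top

# Example: three hypothetical detector outputs for a single image
label, score = ai_generation_label({"detector_a": 0.64, "detector_b": 0.31, "detector_c": 0.48})
print(label, score)  # -> Uncertain 0.64
```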

The threat actor has been using AI-generated images as part of its impersonation tactics, attempting to pose as credible sources and manipulate information, which involves multiple violations, including brand theft. Impersonated brands include Bellingcat, an open-source investigation organisation; Reuters, one of the leading global news agencies; and MSNBC, an American cable news channel. These tactics also appear to mirror the Doppelganger campaign, which has been gaining traction since 2022.


Likely AI-Generated Images

This category derives from the same AI-generation analysis described in the previous subsection; within MAAM’s tools, the likelihood of AI generation is given as a percentage for each separate detection algorithm, and in certain cases an image receives the label “Uncertain”. Likely AI-generated denotes images whose result was inconclusive or “Uncertain”, yet which had a high percentage probability of being AI-generated. Following Debunk.org’s methodology for this research, a threshold of 55% or higher was set for adding images to this category; images ranked above this threshold were counted towards it. One such example is the following image of graffiti, in which the threat actor impersonates Die Welt, a reputable German daily newspaper. The depicted graffiti reads “Jude Denk Daran Jedem das Seine”, translated as “Jew, remember: to each his own”. The German phrase "Jedem das Seine" is the direct translation of the Latin expression suum cuique, which means "to each his own" or "to each what he deserves". During World War II, the Nazis used this phrase with disdain as a motto displayed at the entrance of the Buchenwald concentration camp, and as a result the phrase is viewed as controversial in contemporary Germany. The image was flagged as Uncertain in the AI detection analysis, with the probability of AI alteration reaching 64%, placing it in the Likely AI-Generated category. No record of this graffiti has been found in Welt’s reporting, which strongly suggests impersonation by the threat actor.
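The triage rule described above can be summarised as a small decision function. The subcategory names and the 55% threshold follow this report’s methodology, while the input field names are illustrative assumptions rather than MAAM’s actual output schema.

```python
# Hedged sketch of the subcategorisation within "Manipulated" used in this report.
# Label strings and the 55% threshold follow the report's methodology;
# the parameters are illustrative stand-ins, not MAAM's real output fields.
from typing import Optional

def manipulated_subcategory(ai_label: str, ai_probability: float,
                            forensic_flag: bool) -> Optional[str]:
    """Return the Manipulated subcategory, or None if the image is not manipulated."""
    if ai_label == "AI Generated":
        return "AI-generated"
    if ai_label == "Uncertain" and ai_probability >= 0.55:
        return "Likely AI-generated"   # e.g. the Die Welt graffiti judged at 64%
    if forensic_flag:
        return "Edited"                # manual alteration revealed by forensics
    return None
```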


Another example is an image of a graffiti artwork depicting a fly swatter (representing the United States) decapitating Volodymyr Zelensky. The graffiti also mentions January 2025, aligning with the inauguration of President Donald Trump. The artwork supposedly hints at a radical change in foreign policy regarding the Russia-Ukraine war. The image was judged at 68% and classed as Likely AI-Generated. Further research shows that the alleged author never produced this artwork, which, according to the Ukrainian fact-checking organisation Detector Media, has never existed.

Edited Images

The Edited category describes images that have been manually altered and were identified with the help of the image forensics feature built into MAAM MeVer. This tool can identify altered regions in images, making it useful for detecting edited pictures: it highlights the parts of an image that have been modified, allowing us to uncover both the edits themselves and clues about the original content before manipulation. Examples of such manipulations were found in the following images. The first shows a podcaster seemingly wearing a crossed-out emblem of the Ukrainian coat of arms. The forensic analysis of this image makes clear that the emblem was never present in the original photograph of the man.
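To give a sense of how region-level forensics can expose edits, the sketch below implements error level analysis (ELA), one common forensic technique in which a re-compressed copy is subtracted from the original so that pasted-in or retouched regions stand out. MAAM may rely on different or additional methods; the file names here are hypothetical.

```python
# Hedged sketch of error level analysis (ELA), a common image-forensics technique.
# It is shown only to illustrate the general idea of highlighting edited regions;
# it is not necessarily the method used by MAAM MeVer.
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save the image as JPEG and amplify the per-pixel difference.

    Regions added or edited after the original compression tend to show a
    different error level from the rest of the picture.
    """
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Scale the (usually faint) difference so edited regions become visible
    max_diff = max(channel[1] for channel in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

if __name__ == "__main__":
    # "suspect_graffiti.jpg" is a hypothetical file name used for illustration
    error_level_analysis("suspect_graffiti.jpg").save("suspect_graffiti_ela.png")
```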

A similar alteration appears in the following image, depicting a graffiti of the Ukrainian coat of arms dubbed “UKROTTEN”. The graffiti is alleged to be a still from a fake trailer for a non-existent Netflix documentary. The comment added below translates directly from Russian as “Graffiti - like a punch in the teeth”, showing a clear anti-Ukraine sentiment aligned with the pro-Russian rhetoric spread by the Kremlin. Whilst the AI detector did not flag the image, with likelihood levels below 50%, forensic analysis revealed that the graffiti had been digitally added onto the original photograph of the building.



Another peculiar example of such editing is a pair of images of JD Vance exploring his love for music. The images are visibly edited, yet no AI usage was detected. Forensic analysis once again shows clear alteration of the images.


Overlaid Text Images

One of MAAM’s functionalities is the automatic labelling of images with superimposed text as “meme”. Within our methodology, such images have been assigned to the Overlaid Text category. This category denotes images that appear genuine yet have been altered with a headline or overlaid text relaying a semantically charged message; all of them were flagged as “meme” by MAAM’s automatic labelling. The messages are often highly emotional and polarising, make unfounded claims, and oftentimes have no evident connection to the image they are laid over.
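As a rough illustration of how overlaid text can be flagged automatically, the sketch below runs OCR over an image and marks it when a minimum number of words is found. This is not MAAM’s actual “meme” detector; the word-count threshold and file name are assumptions for the example.

```python
# Hedged sketch: flag images carrying substantial overlaid text using OCR.
# Not MAAM's actual "meme" detector; the threshold is assumed for illustration.
# Requires the Tesseract OCR engine to be installed alongside pytesseract.
import pytesseract
from PIL import Image

def has_overlaid_text(path: str, min_words: int = 5) -> bool:
    """Return True if OCR finds at least `min_words` words in the image."""
    text = pytesseract.image_to_string(Image.open(path))
    return len(text.split()) >= min_words

if __name__ == "__main__":
    print(has_overlaid_text("sample_screenshot.jpg"))  # hypothetical file name
```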


Usually revolving around Ukrainian soldiers, the Ukrainian government, or refugees, the overlaid text aims to discredit these groups through polarising tactics and fabricated, unfounded claims. Most of the claims carry emotionally charged messages that are clear and simple for general audiences, making it easy for susceptible readers with low levels of media literacy to become polarised and form misinformed beliefs about these groups.


For example, the following image shows a man with a blurred face, insinuating that he is a Ukrainian man who lived with his mother’s corpse for more than a year in order to receive governmental subsidies in Poland. Whilst such a story did happen, in reality it concerned an Austrian man who kept his mother’s body in his apartment for over a year in order to receive her pension. The first story on this incident was published in the Guardian on 9 September 2021, before the full-scale invasion of Ukraine began.


No evidence supporting the other claims presented in the images was found during the investigation, leading to the conclusion that the stories were fully or partially fabricated.


CONCLUSION

Debunk.org’s Operation “Overload” case makes it apparent that AI-generated content is routinely used to spread disinformation, simplifying the operational logistics of running disinformation campaigns. The widespread use of AI-generated imagery can be linked to the accessibility of new tools to non-experts. With text-to-image generative models, users can create or modify images simply by providing a textual description. Before the advent of these models, people relied on digital photo editing tools to create manipulated content: simple manipulations were easy to perform but limited in scope, while more sophisticated, high-quality manipulations required much more time. Text-to-image generative models not only make generation much faster, taking seconds to minutes per image, but also provide far more flexibility. Due to their widespread availability, ease of use, and efficiency, the number of AI-generated images on the web has exploded since 2022; statistics from 2023 estimated that genAI services had generated more than 15 billion images. Without appropriate tool support, detecting AI-generated images is also much more time-intensive, as it requires either consultation with an expert or careful inspection and cross-checking of provenance. Threat actors are also known to persist even after an operation is disclosed, as in the case of Operation “Overload”.


Nevertheless, responders, such as MeVer’s MAAM tools, are catching up with solutions to identify altered content. Given the widespread use of AI image generation and its role in facilitating malicious disinformation campaigns, it is crucial to continue supporting and amplifying the tools developed to combat them. The ability to effectively detect the use of AI technology has become the centrepiece of combating and responding to disinformation campaigns and threat actors.

For more information about the AI-CODE Project, click here. The AI-CODE project is funded by the HORIZON Europe Programme of the European Union.




