Digital propaganda: Computational & algorithmic bullshit

Examples and Case Studies in Digital Propaganda

The Willy Wonka Experience

Digital propaganda can be used to create exaggerated representations of events or products, and it is designed to deliberately manipulate public opinion (Neyazi, 2020). A recent example of this is a genuine dumpster fire: the Willy Wonka Experience in Glasgow, Scotland.

This event was advertised to the public using AI-generated photos that made it seem far more inviting than it actually was, demonstrating the dangers of AI and how digital propaganda can be used to persuade an audience. Although these images were obviously fake to many viewers, plenty of families believed them and were disappointed upon arrival.

“Beyond serving as a source of entertainment for everyone but the children who attended, the fiasco raises new questions about comparable scams, and how AI can be used to advance them” (Zhou, 2024).

This use of AI shows how digital propaganda contributes to a climate of misinformation and distrust in media and advertising. As people are exposed to more and more misleading or exaggerated content, their ability to distinguish what is true from what is not is undermined, leading to a decline in critical thinking skills and societal trust.

The connection between digital propaganda and the Willy Wonka event is manipulation: the event manipulated people's expectations and perceptions through carefully crafted AI content, which led to a disappointing reality.

References

Neyazi, T. A. (2020). Digital propaganda, political bots and polarized politics in India. Asian Journal of Communication, 30(1), 39–57. https://doi.org/10.1080/01292986.2019.1699938

Zhou, Li. (2024). The less-than-magical Willy Wonka event, briefly explained: The viral fiasco in Scotland that made kids cry — and prompted calls to police. Vox. https://www.vox.com/technology/2024/2/28/24086217/willy-wonka-glasgow-scotland

Example 2: Fyre Festival as Digital Propaganda

In April 2017, the world witnessed the catastrophic unraveling of the Fyre Festival, a highly publicized music festival advertised as a luxury experience on a private island in the Bahamas. Promoted by social media influencers and marketed as the epitome of extravagance, the event quickly devolved into chaos, leaving attendees stranded without adequate food, shelter, or even the promised musical performers. While the failure of the Fyre Festival can be attributed to numerous factors, it serves as a striking example of the power of digital propaganda.

Central to the promotion of the Fyre Festival was the use of social media influencers, who were paid huge sums to endorse the event to their millions of followers. Through carefully curated posts featuring sun-kissed beaches, celebrity endorsements, and promises of luxury, these influencers created a compelling narrative that captured the imagination of many wealthy individuals. In doing so, they leveraged the principles of digital propaganda to construct a false reality that hid the massive shortcomings and financial mismanagement behind the scenes.

By showcasing images of models on pristine beaches and interacting with A-list celebrities, organizers created an illusion of exclusivity that appealed to the desires of their target audience. This carefully crafted image served as a form of digital propaganda, manipulating perceptions and emotions to generate buzz and drive ticket sales.

As the festival date approached, reports of inadequate accommodations, subpar catering, and canceled performances surfaced on social media, undermining the constructed narrative of luxury. Despite attempts to control the narrative through digital channels, including deleting negative comments and manipulating images, the reality of the event was exposed.

In the aftermath of the Fyre Festival, the power of digital propaganda to shape perceptions and manipulate behavior became abundantly clear. The episode also serves as a warning about the dangers of prioritizing image over experience and the consequences of deceiving an interconnected audience. As technology continues to evolve, our understanding of its potential for both positive and negative influence must evolve with it.

References:

Huddleston, T., Jr. (2019, August 22). Fyre Festival: How a 25-year-old scammed investors out of $26 million. CNBC. https://www.cnbc.com/2019/08/18/how-fyre-festivals-organizer-scammed-investors-out-of-26-million.html

Political Bots and Digital Propaganda

Political bots are becoming increasingly common as technologies advance, posing threats to internet users as digital propaganda continues to thrive. To explore this topic further, one programmer developed a "right-wing alter ego" for the popular AI service ChatGPT (Knight, 2023). Elon Musk has claimed that the original service contains a "woke" bias which could be fixed in a new model of the software (Knight, 2023). The programmer put these claims to the test, comparing the regular ChatGPT to the newly created "RightWingGPT," which is designed to carry a bias toward right-wing politics. Additionally, "DepolarizingGPT" was created, featuring content from stakeholders on both the right and left sides of the political spectrum. After a few tests, it quickly became clear that the politically driven AI models produce many forms of misinformation and carry propagandistic values: RightWingGPT will state that Donald Trump won the US presidential election, or that climate change cannot be predicted. It is important to remember that all of this misinformation is formed by these AI models from information they find online. This study strengthens the idea that the internet is home to a great deal of propagandistic content, especially in a political context.

Dubois and McKelvey (2019) delve into the impacts of political bots on the Canadian democratic process. There is a growing prevalence of political bots and automated social media accounts programmed to amplify specific messages, with significant effects on public opinion, especially during Canadian election periods. These political bots can be used to manipulate or steer online discourse by emphasizing certain narratives, creating the illusion of widespread public support for particular issues, political parties, or candidates. Bots make it appear that more (real) people agree with certain ideologies, making it more likely that individuals will come to agree with those notions. Dubois and McKelvey (2019) explain that there are many challenges in identifying and regulating political bots, especially given the lack of transparency among social media platforms and their stakeholders. There is certainly a need for greater accountability from these platforms and from the individuals creating political bots (and propagandistic content in general) to ensure the integrity of the democratic process.

[RL]

References

Dubois, E., & McKelvey, F. (2019). Political bots: Disrupting Canada's democracy. Canadian Journal of Communication, 44(2), PP27–PP33. https://doi.org/10.22230/cjc.2019v42n2a3511

Knight, W. (2023, April 27). Meet ChatGPT's right-wing alter ego. Wired. https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/

License

A field guide to Bullshit (Studying the language of public manipulation) Copyright © by Derek Foster. All Rights Reserved.
