Disinformation: The language of (intentional) manipulation

13 What is Disinformation?

Introduction

The links between misinformation and disinformation are obvious if one views both, simply, as deviations from truthful information.

Image: misinformation depicted as people gossiping versus disinformation depicted as a man spreading the “latest news.”[1]

The power of the prefix is instructive. The prefix “mis-” means wrong (as in mistaken, misguided, or misapprehension). “Dis-,” on the other hand, is a Latin prefix connoting “away” or “apart” — it sums up an “anti-” attitude, reversing the meaning of that which follows (think distasteful, disability, disbelief, discontent, disown, disloyal, etc.). However, the distinctions aren’t always quite so clear-cut: “When people share disinformation with others not realizing it is false, it turns into misinformation. For this reason, disinformation is often regarded as the subset of misinformation” (Lee & Jia, 2023, p. 31). “On the other hand, misinformation can also be transformed into disinformation. For example, a piece of satire news may be intentionally distributed out of the context to mislead consumers” (Shu et al., 2020, p. 3). Whichever direction it travels, “disinformation is the species of misinformation that is intended to mislead people” (Fallis, 2016, p. 333).

Definitions

Disinformation can be seen as “falsehoods that are spread deliberately” (Bergstrom & West, 2020, p. 29). But perhaps this is a little too broad. Defining it as “dishonest, deliberately deceptive or manipulative communication” (Hameleers & Minihold, 2022, p. 1177) is a little more precise.

Macnamara goes into even more detail, providing two definitions of disinformation. Citing Derakhshan and Wardle, it is cast as ‘information that is false and deliberately created to harm a person, social group, organisation or country’. The UK government, meanwhile, defines it as ‘the deliberate creation and dissemination of false and/or manipulated information that is intended to deceive and mislead audiences, either for the purposes of causing harm, or for political, personal or financial gain’ (2021, p. 38). The common theme is not just intent but also harm. This emphasis is underscored in the definition provided by the United Nations Educational, Scientific and Cultural Organization (UNESCO): “false or misleading content that can cause specific harm, irrespective of motivations, awareness or behaviours” (Guterres, 2023, p. 5). Disinformation “includes fabricated or manipulated audio/visual content and intentionally created conspiracy theories/rumors. The major motivations behind creating or spreading disinformation are political influence (either foreign or domestic), financial gain, and psychological or social reasons (e.g., to cultivate political cynicism or distrust among people)” (Lee & Jia, 2023, pp. 30-31). Clearly, disinformation is a more serious threat than misinformation (even if it is sometimes difficult to distinguish between the two). The United Nations Secretary-General reinforces the gravity of this: “The danger cannot be overstated. Social media-enabled hate speech and disinformation can lead to violence and death” (Guterres, 2023, p. 3).

History

once upon a time …

Varieties

Disinfodemic

“If an overabundance of information can be considered an ‘infodemic,’ then we can also identify an associated subcategory: a ‘disinfodemic’ or a pandemic of nonverified or misleading information” (Canela et al., 2023, p. 113). More than just a lack of verifiable, reliable information amongst a surplus of superfluous information, a disinfodemic “focuses on the potential harmful consequences of mis- and disinformation, as well as the specific challenges associated with an information landscape polluted with false and misleading content” (Canela et al., 2023, p. 114).

Digital Propaganda

While we have a stand-alone unit on “digital propaganda” later on, it is useful to clarify how it is, in fact, a form of disinformation. Robins-Early detailed how misinformation (along with questions of foreign influence and fake news sites) dominated the 2016 United States presidential election, and how conspiracy theories about a stolen election dominated discussion of the 2020 election; four years later, the focus has shifted yet again. This time, disinformation is being reimagined under the influence of artificial intelligence: “Experts warn that advances in AI have the potential to take the disinformation tactics of the past and breathe new life into them. AI-generated disinformation not only threatens to deceive audiences, but also erode an already embattled information ecosystem by flooding it with inaccuracies and deceptions” (2023). In fact, “there is increasing evidence that AI has been, and will be used to generate realistic deepfakes, create and spread disinformation, and target voters with messages that reinforce harmful, or untruthful messages to a level not seen before” (Kotecha & Simperl, 2024). The World Economic Forum’s 2024 Global Risks Report even identified AI-driven misinformation and disinformation as the top global risk over the next two years, ranking it above climate change and war.[2]

These alarmist warnings (see Altay & Acerbi, 2023) about the oncoming threat of increasingly sophisticated digital manipulation of information capable of deceiving voters indicate “a sort of democratization and acceleration of propaganda right at a time when several countries enter major election years” (Robins-Early, 2023). Yet the term ‘digital propaganda’ is widely understood by scholars as lacking conceptual clarity. While some attempt to define the term, many scholars opt instead to highlight the lack of understanding and research surrounding it. Because of the complexity of the online digital world, digital propaganda “is often proposed as an umbrella term for a range of affiliated notions and ideas, including ‘post-truth’, ‘fake news’, and disinformation” (Bjola & Papadakis, 2021, p. 189).

In their chapter “Digital Propaganda, Counterpublics, and the Disruption of the Public Sphere” in The World Information War, Bjola and Papadakis break down the concept of digital propaganda and attempt to clarify a proper definition of the term. They examine both the traditional, informational side of propaganda and the more recent, digital side. On the informational side, they draw on concepts of post-truth, fake news, and traditional understandings of propaganda. On the digital side, the authors “look[ed] at the transformative effect of the digital medium on the mechanisms by which disinformation is generated and disseminated” (Bjola & Papadakis, 2021, p. 189). These mechanisms are better understood through Woolley and Howard’s term ‘computational propaganda’, understood as “the use of algorithms, automation and human curation to purposefully manage and distribute misleading information over social media networks” (Bjola & Papadakis, 2021, p. 189). Essentially, the digital side concerns the technical means used online, like bots or algorithms, that help further spread a propagandistic agenda.

With the informational and digital sides outlined, Bjola and Papadakis define digital propaganda as “the use of digital technologies with the intention to deceive the public through the generation and dissemination of verifiable false or misleading information” (2021, p. 190). In other words, digital propaganda is essentially the use of digital tools, like bots or algorithms, to share and spread propaganda, fake news, or disinformation. While still a bit of an umbrella term for disinformation-related concepts, understanding the two sides (informational and computational) provides a better, more in-depth understanding of the concept. Two varieties of digital disinformation are outlined below.

Memetic Disinformation

“Disinformation actors routinely exploit memes to promote false narratives on political scandals and weaponize them to spread propaganda as well as manipulate public opinion” (Ling et al., 2021, p. 2). Interestingly, one study classifies memes as false information but excludes them from disinformation “because they do not satisfy the ‘intent to harm’ condition,” since memes typically intend, rather, to entertain (Kapantai et al., 2021, p. 1313).[3] Much other research, however, demonstrates that memes’ comedic framing can nonetheless be an effective vehicle for harm. As political scientist Melanee Thomas stated, “You can’t get full information with a meme. … What they want is for people to be disinformed” (in McIntosh, 2019).

Deepfake Disinformation

Deepfakes are a particular kind of digital disinformation. “Deepfakes can be described as visual (e.g. image, video) manipulation using artificial intelligence and deep learning technologies to create fake events (e.g. face swap)” (Lim, 2023, p. 2). Consider the following, now classic, example of a deepfake, meant to alert citizens to the dangers of the technology:

“Those whose intent is to peddle DeepFakes for particular agenda prey on the ignorance, vulnerability and lack of diligence of the part of unsuspecting and unskilled consumers. By capitalizing on consumers’ media and technological illiteracy, content creators de-educate, exploit and disempower users” (Musa, 2023, p. ix).

Corporate Disinformation

Building on widely understood definitions of disinformation, corporate disinformation has been clarified as “the process of issuing verbal or visual messages with an intent to inform or persuade, including false, inaccurate, imprecise or misleading content, created by the company or on its behalf. The messages may be disseminated by the company itself or by others on its behalf and the disinformation contradicts or distorts the common understanding of verifiable facts affecting the company in order to obtain a benefit. This benefit normally promotes the perception of the company or its reputation, but may also harm competitors” (Olivares-Delgado et al., 2022, p. 542). Note the congruity with strategic bullshit in the emphasis upon obtaining a benefit and the loose relationship with the truth. The particular dysfunction that makes it disinformation, however, is the emphasis upon causing harm. Much of the corporate literature focuses on protecting companies from being the target of disinformation.[4] We should, however, also recognize companies as a source of disinformation. Ranging from subtle (or weak) forms to more concerted forms of deception, we might recognize eight types of corporate disinformation: (1) Corporate fake news, (2) Greenwashing, (3) Deceitful Advertising, (4) Misleading Omission, (5) Opacity, (6) Infoxication or information overload, (7) Decontextualized Info and Data, and (8) Illegibility and Inaccessibility (Olivares-Delgado et al., 2022, pp. 542-545).[5]

References

Altay, S., & Acerbi, A. (2023). People believe misinformation is a threat because they assume others are gullible. New Media & Society, 1–22.

Bergstrom, C. T., & West, J. D. (2020). Calling bullshit: The art of skepticism in a data-driven world. Random House.

Bjola, C., & Papadakis, K. (2021). Digital Propaganda, Counterpublics, and the Disruption of the Public Sphere. In T. Clack & R. Johnson (Eds.), The World Information War (pp. 186-213). Routledge.

Canela, G., Claesson, A., & Pollack, R. (2023). Addressing Mis- and Disinformation on Social Media. In T. D. Purnat et al. (Eds.), Managing Infodemics in the 21st Century (pp. 113-126). Springer. https://doi.org/10.1007/978-3-031-27789-4_9

Chong, H. H., Shah, R., & Kulkarni, K. (2023). The emergence of AI tools in academia: potential and limitations. The Bulletin of the Royal College of Surgeons of England, 105(8), 400-402.

Fallis, D. (2016). Mis- and dis-information. In L. Floridi (Ed.), The Routledge handbook of philosophy of information (pp. 332-346). Routledge.

Guterres, A. (2023). Our Common Agenda Policy Brief 8: Information Integrity on Digital Platforms. United Nations. https://www.un.org/sites/un2.un.org/files/our-common-agenda-policy-brief-information-integrity-en.pdf

Hameleers, M., & Minihold, S. (2022). Constructing Discourses on (Un)truthfulness: Attributions of Reality, Misinformation, and Disinformation by Politicians in a Comparative Social Media Setting. Communication Research, 49(8), 1176–1199. 

Kapantai, E., Christopoulou, A., Berberidis, C., & Peristeras, V. (2021). A systematic literature review on disinformation: Toward a unified taxonomical framework. New Media & Society, 23(5), 1301-1326.

Kotecha, R., & Simperl, E. (2024). 2024 will be the year of democracy - or disinformation. The House. https://www.politicshome.com/thehouse/article/2024-will-be-the-year-of-democracy-or-disinformation

Lee, T., & Jia, C. (2023). Curse or Cure? The Role of Algorithm in Promoting or Countering Information Disorder. In M. Filimowicz (Ed.), Information Disorder: Algorithms and Society (pp. 29-45). Routledge.

Lim, W. M. (2023). Fact or fake? The search for truth in an infodemic of disinformation, misinformation, and malinformation with deepfake and fake news. Journal of Strategic Marketing. https://doi.org/10.1080/0965254X.2023.2253805

Ling, C., AbuHilal, I., Blackburn, J., De Cristofaro, E., Zannettou, S., & Stringhini, G. (2021). Dissecting the meme magic: Understanding indicators of virality in image memes. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-24.

Macnamara, J. (2021). Challenging post-communication: Beyond focus on a ‘few bad apples’ to multi-level public communication reform. Communication Research and Practice, 7(1), 35–55. https://doi.org/10.1080/22041451.2021.1876404

McIntosh, E. (2019). Explainer: The rise of Canada’s right-wing meme pages. Canada's National Observer. https://www.nationalobserver.com/2019/10/17/analysis/explainer-rise-canadas-right-wing-meme-pages

Musa, B. A. (2023). Foreword. In K. Langmia (Ed.), Black Communication in the Age of Disinformation: DeepFakes and Synthetic Media (pp. vii-xi). Palgrave Macmillan.

Olivares-Delgado, F., Benlloch-Osuna, M., Rodríguez-Valero, D., & Breva-Franch, E. (2022, October). Corporate Disinformation: Concept and Typology of Forms of Corporate Disinformation. In International Conference on Design and Digital Communication (pp. 536-550). Springer Nature Switzerland.

Robins-Early, N. (2023). Disinformation reimagined: how AI could erode democracy in the 2024 US elections. The Guardian. https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections

Shu, K., Wang, S., Lee, D., & Liu, H. (2020). Mining Disinformation and Fake News: Concepts, Methods, and Recent Advancements. In K. Shu et al. (Eds.), Disinformation, misinformation, and fake news in social media: Emerging research challenges and opportunities (pp. 1-22). Springer.

Silberling, A. (2023, 29 March). From Balenciaga Pope to the Great Cascadia Earthquake, AI images are creating a new reality. TechCrunch. https://finance.yahoo.com/news/balenciaga-pope-great-cascadia-earthquake-144348354.html

UNESCO. (2020). Disinfodemic: Deciphering Covid-19 Disinformation. Paris: UNESCO.

White, W. (2022). Disinformation and Scholarly Communications. Defence Strategic Communications, 11(11), 151-176.


  1. https://www.yourdictionary.com/articles/misinformation-disinformation-compare
  2. see https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
  3. Admittedly, this article's purpose is to articulate an all-inclusive disinformation typology. To do so, it has to emphasize specific categorization criteria.
  4. McDonald's, for instance, has had to persistently respond to rumors that they use 'pink slime' in their meat. See McDonald’s no longer uses ‘pink slime,’ despite rumors. As a rumor spread innocuously out of curiosity, this would be misinformation. However, if this information were to be spread as part of an intentional campaign to harm the reputation of McDonald's, it is disinformation. See, also, TAYLOR SWIFT DEEPFAKE ADS—WHAT BRANDS NEED TO KNOW ABOUT GENERATIVE AI RISKS AFTER PHONY LE CREUSET PROMO. Taken even further, if it's misleading news (spread on purpose) by consumers, that's one thing, but if it is spread by competitors, that's quite another! In an arms-race of information warfare, new technological tools are necessary to defend against new technological threats. See, for example,  How General Mills Is Readying For Generative AI-Fueled Misinformation.
  5. An explicit example of deceitful advertising is provided by “Willy’s Chocolate Experience.” An event billed as a ‘live Willy Wonka experience’ went viral at the end of February 2024 when people who paid more than $40 per ticket, expecting to be immersed in a world of pure imagination, were met with an underwhelming warehouse with sparse decor, frightening new characters, and a disheartened-looking Oompa Loompa. It was so dispiriting that children cried and the police were called! The event was sold with AI-generated images and content by a company called “House of Illuminati” that was barely three months old. It was clearly a case where the advertising was significantly better than the real thing.

Academic Disinformation

Disinformation is not something only circulating on social media platforms, of course. Beyond the fields of politics and economics, it affects academia, specifically through the onslaught of new forms of ‘scholarship’ generation wrought by ChatGPT, Bing Chat, and the like. “The uncontrolled use of AI could result in the production of misleading or erroneous material, which may lead to academic disinformation and dilution of high quality research” (Chong et al., 2023, p. 401). White documented “three types of disinformation impacting scholarly communications which can be classified according to authorial intent: parodic disinformation, which mimics scholarly discourse in order to critique the publication process; opportunist disinformation, which is designed to promote the author or publisher’s scholarly image; and malicious disinformation, which seeks to distort the public perception of a scientific or sociopolitical issue” (2022, p. 153).

Perhaps it is worth noting how AI engines like ChatGPT can contribute to opportunist disinformation. It seems obvious, in the purely hypothetical case of students producing machine-generated scholarship and passing it off as their own, that this new era of academic dishonesty is in fact disinformation, especially if it is defined as “nonaccidentally misleading information” (White, 2022, p. 153). This is even more obvious when the information may be true on its face, but is presented not just as original scholarship, but as scholarship grounded in citations that are purely invented. “Sometimes, AI tools like ChatGPT and Bing AI ‘hallucinate’ — a term used when they confidently answer questions with fake information. Awash in a sea of synthetic imagery, we might all be on the cusp of collectively hallucinating” (Silberling, 2023).

Take, for example, ChatGPT’s suggestion for a reading to help explain a week on ‘Fact-Checking and Media Literacy’ in a theoretical third-year university course dedicated to disinformation, language, and the public sphere. The suggestion was “Media Literacy in the Digital Age,” edited by Belinha S. De Abreu and Paul Mihailidis, but this book does not exist. There is a book that sounds awfully similar, though (Media Literacy in a Disruptive Media Environment, edited by William G. Christ and Belinha S. De Abreu). Note the similar-sounding title and at least one correct editor. Even more curious, there is a chapter in the (real) book titled “News Media Literacy in the Digital Age: A Measure of Need and Usefulness of a University Curriculum in Egypt” by Rasha Allam and Salma El Ghetany, but this is another level of dissimilitude.

Even more egregious results came when I prompted it to suggest academic articles from the last few years. For a prospective week on ‘Disinformation and Language,’ it suggested: Tandoc Jr., E. C., & Lee, T. T. (2020). “Fake news is not simply false information”: A review of scholarly definitions. Journalism & Mass Communication Quarterly, 97(4), 982–988. This sounded promising, until I searched for the source. I found an article titled “‘Fake News’ Is Not Simply False Information: A Concept Explication and Taxonomy of Online Content,” but it was written by Maria D. Molina, S. Shyam Sundar, Thai Le, and Dongwon Lee, and published in 2021, in Volume 65, Issue 2 of American Behavioral Scientist. Not by coincidence, I believe, one of the sources cited in that article was “Defining ‘fake news’” (Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Digital Journalism, 6, 137-153). You can’t trust what you read on the internet, apparently...
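As an aside, the kind of manual source-checking described above can be partly automated. What follows is a minimal sketch, not drawn from any of the works cited here, that queries the Crossref REST API (a real, free index of published scholarship at api.crossref.org) to see whether a suggested citation corresponds to anything actually published; the helper name lookup_citation and the formatting of the output are my own illustration.

import requests

def lookup_citation(title: str, rows: int = 5) -> list[dict]:
    """Search Crossref for published works whose metadata matches `title`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    # Crossref wraps results in {"message": {"items": [...]}}
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    # The (hallucinated) article ChatGPT suggested in the anecdote above.
    suggested = ('"Fake news is not simply false information": '
                 "A review of scholarly definitions")
    for item in lookup_citation(suggested):
        title = (item.get("title") or ["(untitled)"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        authors = ", ".join(a.get("family", "?") for a in item.get("author", []))
        print(f"{year} | {title} | {authors}")
    # If none of the returned works match the suggested title and authors,
    # the citation was almost certainly invented.

Run against the suggested title, such a search should surface the real Molina et al. (2021) piece in American Behavioral Scientist rather than the nonexistent Journalism & Mass Communication Quarterly article, exposing the hallucination in seconds.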

Context & Connections

Clearly, just as is the case with misinformation, “social media is also fertile ground for disinformation” (Bergstrom & West, 2020, p. 29). In the following explanation, however, Bergstrom and West seem to minimize the culpability of users who spread disinformation: “Social media posts are unconstrained by most borders. And they are shared organically. When social media users share propaganda they have encountered, they are using their own social capital to back someone else’s disinformation. If I come across a political leaflet or poster on the street, I am immediately skeptical. If my dear uncle forwards me a story on Facebook that he ‘heard from a friend of a friend,’ my guard drops. Disinformation flows through a network of trusted contacts instead of being injected from outside into a skeptical society” (2020, p. 31). This makes it seem like disinformation (falsehoods) can be spread purposefully but not with malice (if, because we receive it from ‘trusted sources’, we don’t subject the information to the same level of scrutiny that we might if we encountered it in other circumstances). On this understanding, false information is intentionally spread, but one does not necessarily spread it with the forethought that it is false. In other words, spreading disinformation is not the exclusive purview of troll farms and disruptive foreign actors; we all contribute to it.

Related to politics, the distinction between (the malevolence of) disinformation and (the generally more innocuous) misinformation is clear: “Although labeling sources and knowledge as misinformation may be regarded as honest mistakes, shifting blame for lying, deceiving, and manipulating the truth may correspond to institutional distrust and cynicism among citizens who no longer trust the impartiality and honesty of the press” (Hameleers & Minihold, 2022, p. 1180). Disinformation, then, is much more explicitly connected to alarmist fears about the health of democracy, assumed to be a tactic deployed by politicians in search of political success, aligned with broad attitudes dismissing the very possibility of unbiased news reporting or honest politicians, and tied to concerns over echo chambers and polarized, partisan realities. It should be noted, though, that “mis- and disinformation discourses may overlap at times: Although the discourses refer to different types of untruthfulness, they both refer to the flagging of falsehoods by politicians” (Hameleers & Minihold, 2022, p. 1187).

License

A field guide to Bullshit (Studying the language of public manipulation) Copyright © by Derek Foster. All Rights Reserved.
