
2.6 Misinformation, Disinformation, & Mal-information


Because generative AI technologies can produce plausible information at scale, they have exacerbated the potential harms of misinformation, disinformation, and mal-information in a time of information abundance. The generation or dissemination of fake, inaccurate, or misleading information through generative AI may be either unintentional or deliberate.

Misinformation refers to inaccurate or false information that is shared without the intent to cause harm. This can occur when generative AI users share outputs without first verifying them.

Mal-information refers to information that may be rooted in truth or fact but is removed from context or distorted in ways that can mislead. With generative AI, this might result from inaccurate outputs or hallucinations. Malicious actors could also use generative AI to distort information in ways that remain plausible.

Disinformation refers to inaccurate or false information that is shared with malicious intent, in order to mislead recipients or manipulate decision-making or perspectives. Generative AI could be used to generate fake news stories or fake datasets, or otherwise be employed in attempts to deceive at scale.

(Canadian Centre for Cyber Security, 2024; Jaidka et al., 2025)

One example of this is the use of text-to-image and text-to-video generative AI tools to produce visual media for the purposes of deception, whether malicious or not. A deepfake is a believable but fake video, audio clip, or image produced by generative AI, often depicting a real person saying or doing something they never actually said or did. Deepfakes have potential benefits for the arts, social advocacy, education, and other purposes, but they present ethical issues: permission to use a person's likeness has often not been obtained, and deepfakes have the potential to spread misinformation or mislead people.

Ethical Case Study: Misinformation & Deception

One of your course assignments asks students to produce a piece of speculative fiction reflecting on the future if immediate action isn't taken in response to climate change. One student creates a video of a news report showing the world in crisis. Within the video, they include deepfakes of several world leaders justifying their lack of action over the past 10 years.

What ethical considerations are there around this use of AI?

Feedback

Deepfakes present a few important ethical issues, particularly with regard to misrepresentation, intention to deceive, and politics and political agendas. In this case, the student wasn't necessarily attempting to deceive viewers, but if you allow or encourage AI use in your courses, it's important to help students understand the ethics of generative AI and its potential harms.


License


Domains of AI-Awareness for Education Copyright © 2025 by Dani Dilkes is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
