
2.5 Bias

Bias-In, Bias-Out

A screenshot of a Midjourney query and response. The prompt reads "the incoming class of Western University students for 2023". The image shows a large crowd of students who visually present as primarily white, slender, young, and able-bodied.
Image generated using Midjourney in 2023 in response to the prompt: “The incoming class of Western University students for 2023”

Many AI models are trained on data where social biases are present. These biases are then encoded into the patterns, relationships, rules, and decision-making processes of the AI and have a direct impact on the output.

Biased output can be easy to spot, as in the AI-generated image above, which depicts a predominantly white incoming class at Western, but it can also be far less visible. AI-generated text reflects the dominant ideologies, discourses, language, values, and knowledge structures of the datasets the model was trained on. For example, Large Language Models may be more likely to reproduce certain dominant forms of English, underrepresenting regional, cultural, racial, or class differences (D’Agostino, 2023).

The ethical issue is twofold. First, the information generated by Generative AI is more likely to reflect dominant social identities, meaning that students who use AI will not be exposed to certain worldviews or perspectives, and some students may not see their experiences and identities reflected in the output. Second, using Generative AI to produce knowledge will continue to reinforce the dominance of these ideologies, values, and knowledge structures, contributing to further inequities in representation.

As an instructor, it’s important to be aware of this limitation of AI tools. If you ask your students to use these tools, it’s equally important to teach them critical AI literacies so that they, too, can identify and reflect on these issues of representation, bias, and equity.

Some Generative AI companies have taken steps to correct for biases in their training data by establishing content policies or other guardrails that prevent the model from generating biased or discriminatory output. However, these guardrails are applied inconsistently and depend on the ethical standards of each individual company.


Screenshot of conversation with ChatGPT. Prompt: Generate an image of a barbie doll representing Germany. Response: I couldn’t generate the requested image because it didn’t comply with the content policy. This policy ensures that outputs are respectful and do not unintentionally perpetuate stereotypes or cultural misrepresentations. If you’d like, I can help explore alternative ways to represent your idea. Let me know how you'd like to proceed!
Conversation with ChatGPT in January 2025.

License

Icon for the Creative Commons Attribution-NonCommercial 4.0 International License

AI Literacy for Higher Education Copyright © by ddilkes2 is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.