10.2: AI literacy and bias

Challenges and ethical considerations of AI

Although AI itself has been around for decades, research on AI in the public sphere is limited, and much of it is quite recent. You will see ChatGPT mentioned frequently throughout this chapter: the tool is widely publicized and fairly accessible, it is currently used by marketing students and professionals in a number of ways, and the ChatGPT API fuels many of the AI resources used in market research. UNESCO published a quick start guide, ChatGPT and Artificial Intelligence in Higher Education, that includes some excellent considerations for students and for anyone interested in learning more about AI, and about ChatGPT specifically.
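
Because the ChatGPT API fuels many of the AI resources mentioned in this chapter, it may help to see roughly what calling it looks like. The following is a minimal sketch, in Python, of how a market research tool might send open-ended survey comments to OpenAI's chat completions API for summarization; the model name, prompts, and sample comments are illustrative placeholders, not taken from this book or UNESCO's guide.

from openai import OpenAI  # OpenAI's official Python SDK

# The client reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# Hypothetical open-ended survey responses a researcher might want summarized.
comments = [
    "The checkout process was confusing on mobile.",
    "Great selection, but shipping took too long.",
    "Customer service resolved my issue quickly.",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a market research assistant."},
        {
            "role": "user",
            "content": "Summarize the main themes in these survey comments:\n"
            + "\n".join(comments),
        },
    ],
)

# The generated summary is plain text the tool can display or store.
print(response.choices[0].message.content)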

UNESCO shares some challenges and ethical considerations that are helpful to understand and explore before using any AI tool:

Lack of regulation

At the time of publication [2023], ChatGPT is not regulated. The extremely rapid development of ChatGPT has caused apprehension for many, leading a group of over 1,000 academics and private-sector leaders to publish an open letter calling for a pause on the training of powerful AI systems. This pause would allow time for potential risks to be investigated and better understood, and for shared protocols to be developed.

Privacy concerns

In April 2023, Italy became the first country to block ChatGPT due to privacy-related concerns. The country’s data protection authority said that there was no legal basis for the collection and storage of personal data used to train ChatGPT. The authority also raised ethical concerns about the tool’s inability to determine a user’s age, meaning minors may be exposed to age-inappropriate responses. This example highlights wider issues relating to what data is being collected, by whom, and how it is applied in AI.

Cognitive bias

It is important to note that AI is not governed by ethical principles and cannot distinguish between right and wrong, or between true and false. AI tools like ChatGPT only collect information from the databases and texts they process on the internet, so they also learn any cognitive biases found in that information. It is therefore essential to critically analyse the results they provide and compare them with other sources of information.

Gender and diversity

Concerns about gender and other forms of discrimination apply to all forms of AI. On the one hand, this reflects the lack of female participation in AI-related subjects and in AI research and development; on the other hand, it reflects the power of generative AI to produce and disseminate content that discriminates against or reinforces gendered and other stereotypes.

Accessibility

There are two main concerns around the accessibility of AI, particularly in relation to ChatGPT. The first is the lack of availability of the tool in some countries due to government regulations, censorship, or other restrictions on the internet. The second relates to broader issues of access and equity in terms of the uneven distribution of internet availability, cost, and speed. Relatedly, teaching, research, and development on AI have not been evenly spread around the world, and some regions have been far less able to develop knowledge or resources on this topic.

Assessing AI tools

“In the era of generative AI, one of the most important resources is data. Data, whether labeled or unlabelled, is crucial when training AI models to ‘learn’ how to complete their intended task(s). However… the procedures that surround data acquisition and dataset formation are largely unregulated by any legal entity” (Fokam, 2024).

Marketers typically cannot see how the datasets used to train AI are collected, but much of the source data is “…content from the internet, which is unfiltered and can contain hate speech or discriminatory language towards racial and gender minorities. In other words, generative AI, especially in large language models like ChatGPT, can inadvertently perpetuate certain biases and amplify existing socioeconomic disparities” (Fokam, 2024).

When thinking about whether and how to use AI tools to complete a task, consider the limitations of AI as well as the opportunities it offers for faster, more efficient, and possibly more accurate output.

There are a number of frameworks for assessing digital and AI tools, including the ROBOT test developed by The LibrAIry (Hervieux & Wheatley, 2020), outlined below; a short sketch of recording this checklist for a specific tool follows the list:

Reliability; Objective; Bias; Owner; Type

Reliability
  • How reliable is the information available about the AI technology?
  • If it’s not produced by the party responsible for the AI, what are the author’s credentials? Bias?
  • If it is produced by the party responsible for the AI, how much information are they making available?
    • Is information only partially available due to trade secrets?
    • How biased is the information that they produce?
Objective
  • What is the goal or objective of the use of AI?
  • What is the goal of sharing information about it?
    • To inform?
    • To convince?
    • To find financial support?
Bias
  • What could create bias in the AI technology?
  • Are there ethical issues associated with this?
  • Are bias or ethical issues acknowledged?
    • By the source of information?
    • By the party responsible for the AI?
    • By its users?
Owner
  • Who is the owner or developer of the AI technology?
  • Who is responsible for it?
    • Is it a private company?
    • The government?
    • A think tank or research group?
  • Who has access to it?
  • Who can use it?
Type
  • Which subtype of AI is it?
  • Is the technology theoretical or applied?
  • What kind of information system does it rely on?
  • Does it rely on human intervention?
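
The following is a minimal sketch, in Python, of how a ROBOT review might be recorded as a simple checklist data structure; the class, field names, and example notes are illustrative assumptions and are not part of the ROBOT test itself.

from dataclasses import dataclass, field

@dataclass
class RobotAssessment:
    """Notes from a ROBOT-test review of one AI tool (illustrative structure)."""
    tool_name: str
    reliability: list = field(default_factory=list)  # credibility of available information
    objective: list = field(default_factory=list)    # why the tool and its information exist
    bias: list = field(default_factory=list)         # known or suspected sources of bias
    owner: list = field(default_factory=list)        # who develops, owns, and can access it
    ai_type: list = field(default_factory=list)      # subtype of AI and the data it relies on

    def summary(self) -> str:
        """Return the recorded notes as readable text."""
        sections = {
            "Reliability": self.reliability,
            "Objective": self.objective,
            "Bias": self.bias,
            "Owner": self.owner,
            "Type": self.ai_type,
        }
        lines = [f"ROBOT assessment: {self.tool_name}"]
        for name, notes in sections.items():
            lines.append(f"{name}:")
            lines.extend(f"  - {note}" for note in notes)
        return "\n".join(lines)

# Hypothetical example: documenting a review of a sentiment-analysis tool.
assessment = RobotAssessment(
    tool_name="Example sentiment analysis tool",
    reliability=["Only vendor documentation available; no independent review found"],
    objective=["Marketing materials aim to convince rather than inform"],
    bias=["Training data sources are not disclosed"],
    owner=["Private company; paid access only"],
    ai_type=["Large language model accessed through an API"],
)
print(assessment.summary())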


Resources

Fokam, G. M. (2024, January 3). Breaking the Binary: Navigating Generative AI, Feminism, and Racial Equity in the Era of Digital Redlining. OER Commons. CC BY-NC-SA 4.0 DEED

Hervieux, S. & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The LibrAIry. CC BY-NC-SA 4.0 DEED

Silberg, J., & Manyika, J. (2019, June). Generative AI and the future of work in America. McKinsey & Company.

United Nations Educational, Scientific and Cultural Organization. (2023). ChatGPT and Artificial Intelligence in higher education quick start guide. CC-BY-SA 3.0 IGO

License

Introduction to Market Research Copyright © by Julie Fossitt is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
