Academic Integrity & Equitable Access

While the limitations and risks outlined in the previous chapter also apply to the post-secondary context, there are several risks specific to our College environment worth considering, particularly around supporting academic integrity and ensuring equitable access.

(Re)defining academic integrity and academic dishonesty

Many college community members are understandably concerned about the implications of generative AI tools for academic integrity. Appropriately using and citing the work of others, building on it with one’s own critiques and ideas, and clearly and correctly representing one’s own work and contributions are some of the aspects of academic integrity that the use of generative AI could affect.

In an article describing how he integrated generative AI into writing assignments, Paul Fyfe observes,

“[C]omputer- and AI-assisted writing is already deeply embedded into practices that students already use. The question is, where should the lines be drawn, given the array of assistive digital writing technologies that many people now employ unquestioningly, including spellcheck, autocorrect, autocomplete, grammar suggestions, smart compose, and others […] within the spectrum of these practices, what are the ethical thresholds? At what point, in what contexts, or with what technologies do we cross into cheating?”[1]

He continues, “educational institutions continue to define plagiarism in ways that idealize originality.”[2] Here Fyfe highlights a recurring theme in the literature on academic integrity and artificial intelligence: with these technologies, the defined boundaries of independent work become porous, and the contrast between “human originality and machine imitation”[3] blurs.

The result of this shift in understanding is a call within the literature to reexamine, and perhaps redefine, what constitutes plagiarism, academic integrity and academic dishonesty. Some authors argue that “Academic integrity is about being honest about the way you did your work”[4], others urge a defended boundary of primarily individual effort[5], and still others argue for an entirely new framework: what Sarah Eaton calls ‘postplagiarism’, in which hybrid human-AI writing becomes the norm.

Where most of the reviewed literature does reach consensus is that using generative artificial intelligence does not automatically constitute academic misconduct[6]. Rather, to quote the European Network for Academic Integrity, “Authorised and declared usage of AI tools is usually acceptable. However, in an educational context, undeclared and/or unauthorised usage of AI tools to produce work for academic credit or progression (e.g. students’ assignments, theses or dissertations) may be considered a form of academic misconduct.”[7]

Citation

Beyond students using generative AI tools in an unauthorized manner to complete assignments, another academic integrity consideration is the importance of accurately attributing and citing the words and ideas of others in one’s own work, as part of scholarly integrity. Some text-generation tools, such as ChatGPT, don’t cite the sources of the information in the text they generate, or, if they do, the citations may be fictional. Students using such tools, say for idea generation, would not have access to the original sources in order to properly cite them in their own work. They may therefore inadvertently include ideas from other sources that should be cited, and could have been had the students done the research themselves. It is worth talking with students about this situation, noting that they should always verify information from generative AI tools, since there can be errors, and that by verifying it they can also provide their own sources. (Note: some generative AI tools connected to the internet, such as Bing Chat and Perplexity, do link to sources for the information they provide.)

By fully and correctly citing AI-generated content, you are:

  • Upholding the principle of academic integrity by giving due credit to your sources
  • Providing transparency about the origin of the information, which can have a different bias profile than human-generated content
  • Contributing to the important task of tracking the influence of AI in our collective knowledge, highlighting its role in various domains


Detection

Questions around detecting AI-generated writing fall into three categories:

  1. the technological: is it possible to reliably detect AI-generated writing?
  2. the philosophical: is the role of the educator one of trust or one of surveillance?
  3. the existential: what is the value of a university degree if the academic labour behind it is uncertain?

There are not yet reliable detection tools. Those that are available (GPTZero, Turnitin, Originality.ai, and others) have been found to misidentify original student content as AI-generated, with some findings demonstrating that “these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified.”[8]

Moreover, students have not consented to having their work submitted to these tools, raising open questions about data privacy and security.[9]

While technology and a perceived ‘arms race’ between detection tools and AI tools pose their own challenges[10], there are also questions about the role of educators and their assumptions about students as learners. With significant evidence that student academic misconduct is on the rise, particularly during the pandemic, some argue that “we must prioritize student learning above catching cheaters”[11] and that understanding why students engage in academic misconduct may point to approaches that reduce these behaviours. Indeed, academic dishonesty and opportunities to cheat predate generative AI; what these tools introduce is the “ease and scope”[12] that amplify an existing challenge.

Students’ self-reported reasons for academic misconduct include performance pressure, high-stakes exams, overwhelming workload, being unprepared, feeling ‘anonymous’, increased opportunities to cheat enabled by technology, peer acceptance of cheating, misunderstanding plagiarism, and believing it will go unpunished. In short, “Students are more likely to engage in academic misconduct when they are under pressure, when there is an opportunity, and when they are able to rationalize it.”[13]

Instead of positioning the educator as one who detects and surveils, these pieces suggest the role be one of designing authentic, scaffolded assessments and explaining and exploring academic integrity with students.

Within these proactive strategies for cultivating academic integrity is an implied sense of time and scale; that is, these strategies assume instructors have sufficient time, resources and energy to update or redevelop courses and assessments. Providing scalable, supported and realistic assessment redesign will be one of the ongoing areas of need for educators as generative AI is integrated into more tools and more courses. A later chapter in this book focuses specifically on strategies you might take to redesign assessment to promote academic integrity.


Equitable Access

The cost of tools poses a barrier for many students in accessing generative AI. While many tools are currently available for free, some, like ChatGPT, have paid tiers with significant improvements in functionality and performance for subscribers. Students who can afford paid tiers may be disproportionately advantaged in assignments that incorporate the use of generative AI.

As educators, we need to design activities that encourage the use of free versions. For instance, Microsoft’s Bing, used in Creative mode, draws on GPT-4, the same model that powers the paid version of ChatGPT. Designing assessments around these free versions will make access easier for all students, even while inequities in internet availability, cost and speed persist.

That said, if students are learning online from other countries, particular tools, like ChatGPT, may be restricted due to government regulation or censorship. Attention to this possibility may mean allowing some students to opt out of assignments that use generative AI, or providing alternatives for their engagement.

Finally, the intersection of generative AI and students with disabilities is an area of emerging research; we aim to add more information about generative AI as assistive technology in the coming months.


Attributions

This page has been adapted from:

Generative Artificial Intelligence in Teaching and Learning at McMaster University, Copyright © 2023 by the Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching, is licensed under a Creative Commons Attribution 4.0 International License.

Academic Integrity, from Assessment Design in an Era of Generative AI, University of British Columbia Centre for Teaching, Learning and Technology: https://ctlt.ubc.ca/resources/assessment-design-in-an-era-of-generative-ai/academic-integrity/

Future Facing Assessments by Eliana Elkhoury and Annie Prud’homme-Généreux is licensed under CC BY 4.0

Why Cite Generated Content?, Queen’s University Centre for Teaching and Learning: https://www.queensu.ca/ctl/resources/educational-technology/generative-ai-teaching-and-learning/academic-integrity. This resource was remixed from the McMaster University Library and adapted to the Queen’s University context; all content is licensed under CC BY-SA.


License


Generative Artificial Intelligence in Teaching and Learning Copyright © 2023 by Centre for Faculty Development and Teaching Innovation, Centennial College is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
