Post-secondary Specific Limitations and Risks

While the limitations and risks outlined earlier in this chapter also apply to the post-secondary context, there are several risks specific to our University environment worth considering, particularly around supporting academic integrity and ensuring equitable access.

(Re)defining academic integrity and academic dishonesty

McMaster’s Academic Integrity Policy defines academic dishonesty as “to knowingly act or fail to act in a way that results or could result in unearned academic credit or advantage,” and specifies that “it shall be an offence knowingly to … submit academic work for assessment that was purchased or acquired from another source.”

In an article describing how he integrated generative AI into writing assignments, Paul Fyfe observes:

“[C]omputer- and AI-assisted writing is already deeply embedded into practices that students already use. The question is, where should the lines be drawn, given the array of assistive digital writing technologies that many people now employ unquestioningly, including spellcheck, autocorrect, autocomplete, grammar suggestions, smart compose, and others […] within the spectrum of these practices, what are the ethical thresholds? At what point, in what contexts, or with what technologies do we cross into cheating?”[1]

He continues, “educational institutions continue to define plagiarism in ways that idealize originality.”[2] Here Fyfe highlights a recurring theme in the literature on academic integrity and artificial intelligence: with these technologies, the defined boundaries of independent work have become porous, and the contrast between “human originality and machine imitation”[3] blurs.

The result of this shift in understanding is a call within the literature to reexamine, and perhaps redefine, what constitutes plagiarism, academic integrity and academic dishonesty. Some authors argue that “Academic integrity is about being honest about the way you did your work”[4], others urge a defended boundary of primarily individual effort[5], and still others argue for a new framework entirely – what Sarah Eaton calls ‘post-plagiarism’, in which hybrid human-AI writing becomes the norm.

Where most of the reviewed literature holds consensus is that using generative artificial intelligence does not automatically constitute academic misconduct.[6] Rather, to quote the European Network for Academic Integrity, “Authorised and declared usage of AI tools is usually acceptable. However, in an educational context, undeclared and/or unauthorised usage of AI tools to produce work for academic credit or progression (e.g. students’ assignments, theses or dissertations) may be considered a form of academic misconduct.”[7]

Detection

Questions around detecting AI-generated writing fall into three categories:

  1. the technological – is it possible to reliably detect AI-generated writing?
  2. the philosophical – is the role of the educator one of trust or one of surveillance?
  3. the existential – what is the value of a university degree if the academic labour behind it is uncertain?

There are not yet reliable detection tools. Those that are available – GPTZero, Turnitin, Originality.ai and others – have been found to misidentify original student content as AI-generated, with some findings demonstrating that “these detectors consistently misclassify non-native English writing samples as AI-generated, whereas native writing samples are accurately identified.”[8]

Moreover, students have not consented to having their work submitted to these tools, raising open questions about data privacy and security.[9]

While technology and a perceived ‘arms race’ between detection tools and generative AI tools pose their own challenges,[10] there are also questions about the role of educators and their assumptions about students as learners. With significant evidence pointing to a rise in student academic misconduct, particularly over the pandemic, there are arguments that “we must prioritize student learning above catching cheaters”[11] and that understanding why students engage in academic misconduct may point to approaches for reducing these behaviours. Indeed, instances of academic dishonesty and opportunities to cheat predate generative AI; what these tools introduce is an “ease and scope”[12] that amplifies an existing challenge.

Students’ self-reported reasons for academic misconduct include performance pressure, high-stakes exams, overwhelming workload, being unprepared, feeling ‘anonymous’, increased opportunities to cheat enabled by technology, peer acceptance of cheating, misunderstanding plagiarism, and the sense that it will go unpunished. A research brief on why students cheat summarizes findings suggesting that academic dishonesty declines when students are clear about what constitutes academic integrity and dishonesty, understand the expectations for their academic work, and perceive a mutual benefit in behaving with integrity rather than competing with other students. In short, “Students are more likely to engage in academic misconduct when they are under pressure, when there is an opportunity, and when they are able to rationalize it.”[13]

Instead of positioning the educator as one who detects and surveils, these pieces suggest the role be one of designing authentic, scaffolded assessments and of explaining and exploring academic integrity with students.

Within these proactive strategies for cultivating academic integrity is an implied sense of time and scale – that is, these strategies assume instructors have sufficient time, resources and energy to update or redevelop courses and assessments. Providing scalable, supported and realistic assessment redesign will be an ongoing area of need for educators as generative AI is integrated into more tools and more courses. A later chapter in this book focuses specifically on strategies you might take to redesign assessments to promote academic integrity.

Equitable Access

Cost poses a barrier for many students in accessing generative AI tools. While many tools are currently available for free, some – like ChatGPT – offer paid tiers with significant improvements in functionality and performance for subscribers. Students who can afford paid tiers may be disproportionately advantaged in assignments that incorporate the use of generative AI.

As educators, we need to design activities that encourage the use of free versions. For instance, Microsoft’s Bing, used in creative mode, draws on GPT-4, the same model that powers the paid version of ChatGPT. Designing assessments around these free versions makes access easier for all students, even as inequities in internet availability, cost and speed persist.

That said, if students are learning online from other countries, particular tools, like ChatGPT, may be restricted due to government regulation or censorship. Attending to this possibility may mean allowing some students to opt out of assignments that use generative AI, or providing alternative ways for them to engage.

Finally, the intersection of generative AI and students with disabilities is an area of emerging research; we aim to add more information about generative AI as assistive technology in the coming months.



License

Generative Artificial Intelligence in Teaching and Learning at McMaster University Copyright © 2023 by Paul R MacPherson Institute for Leadership, Innovation and Excellence in Teaching is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.