"

22 Case Analysis: Critiquing AI-Generated Essays in an Academic Writing Course

Roman Naghshi; John Drew; and Emily Pez

Topic: Students critiquing artificial intelligence (AI)-generated essays and AI’s environmental impact

Characters: Dr. Faizollah Hosseini, a professor of Academic Writing; 30 multilingual international undergraduate students in Academic Writing 101; University Environmental Sustainability Officer

Setting:

Academic Writing 101 at Maple City University is designed to improve students’ critical thinking, analytical, and composing skills. Maple City University has a nonspecific AI policy, which states that professors must decide how they would like to engage with AI in their classrooms.

Dr. Hosseini is worried that students will turn to AI for idea generation, thus neglecting a key step in the writing process: brainstorming and coming up with ideas of their own. At the same time, he believes students need to learn about the strengths and limitations of AI in academic writing, so he has introduced a new assignment.

Pedagogical Rationale for the Assignment:

Dr. Hosseini’s assignment design was influenced by Fullan’s (1993) argument that teachers must draw on Personal Vision-Building and Inquiry in order to be agents of change. Personal Vision-Building involves teachers reflecting on their motivations and goals and aligning their personal ethics with their professional actions. Inquiry is the process of continuous questioning and learning to keep teaching methods current. Dr. Hosseini wanted to improve his instruction in the dynamic environment of education and technology and to ensure that students did not fear AI or feel policed by AI detection tools. He was aware that GPT detectors used by instructors misidentify the writing of multilingual (L2) students as AI-generated more frequently than the writing of students whose first language (L1) is English (Liang et al., 2023). He hoped the assessment would give students a space to learn how to use AI while developing their competencies as critical evaluators of writing, and that it would afford increased equity for multilingual students.

Moreover, Dr. Hosseini’s process-based learning framework aligns with the theory that knowledge is constructed through an active learning process rather than being passively received (Bransford et al., 2000; Piaget, 1954). This theory supports the idea that students learn best when they are directly involved in the learning process, engaging in research and applying what they have learned in a meaningful context.

Case Narrative:

The assignment had two steps. First, Dr. Hosseini required each student to choose and interact with a generative AI tool (e.g., ChatGPT, Google Gemini, or Microsoft Copilot), asking it to write an essay on a topic that the student was familiar with. Students were required to state which AI tool they used, document the entire interaction, and provide a transcript of their conversation with the AI tool.

Next, once the AI-generated essay was produced, students were asked to evaluate and critique the essay, focusing on elements such as argument, standpoints or biases, originality, structure, and style. The goal of this stage was to enhance students’ critical thinking skills and deepen their understanding of the ethical and intellectual complexity of using AI in academic writing.

While students were working on the assignment, Dr. Hosseini happened to meet the university’s Environmental Sustainability Officer, who voiced concerns about the high energy consumption of AI technologies. For instance, “GPT-3 needs to ‘drink’ (i.e., consume) a 500ml bottle of water for roughly 10-50 responses” (Li et al., 2023, p. 3) due to the cooling requirements of the hosting servers. Moreover, studies such as Strubell et al. (2019) highlight the large amounts of energy required to train AI models: training a large Transformer-based language model with neural architecture search is estimated to produce carbon dioxide emissions of 626,155 lbs, nearly five times the lifetime emissions of an average gasoline-powered car (including its fuel). Realizing he had not considered these sustainability issues when designing the assignment, Dr. Hosseini decided to facilitate class discussions through which students could share their experiences, compare the performance of different AI tools, and reflect on the broader impact of AI on both education and the environment.

Observations:

After evaluating the 30 assignments submitted by his students, Dr. Hosseini observed the following:

  • Twelve students provided thorough critiques, highlighting both the strengths and weaknesses of the AI-generated essays.
    • Six of those students raised concerns about the ethical implications of using AI in academic writing, while four expressed worries about the environmental impact of using AI tools extensively.
  • Fourteen students struggled to identify the nuances of AI-generated content, at times showing either over-reliance on AI or underestimation of its capabilities.
  • Dr. Hosseini, to his dismay, suspected that four of the students had used AI to generate their entire critiques.

Feedback and Reactions from Students and Professor:

  • “There are not enough resources that Maple City has provided to me about generative AI. I don’t know what kinds of questions to ask the Chatbot to get the responses I need.”
  • “Critiquing AI essays helps me see both the strengths and limitations of AI in writing. For instance, a strength is that it has helped me understand some complex concepts in my classes in ways that are more relatable to me. A limitation, though, is that the essays sometimes cite fake sources.”
  • “When I first used the AI tool, I got a text that treated all people as divided between two genders. As I continued to ask questions, I got a better paper with research sources about non-binary people.”
  • “I find it hard to critique AI work because it seems too polished.”
  • “I’m concerned about the ethics of using AI in academic settings. For example, some versions of the AI tools can reuse our data again for other purposes. I had to pay extra to sign up for the premium version of ChatGPT to keep my data safe (OpenAI, 2024). I also have more helpful chats with the premium one. This could give students who have more money an unfair academic advantage.”
  • “The AI tool I used generated an essay that contained inaccuracies about Indigenous history in North America. As a new international student in Canada, I had the privilege of learning about this history from an Indigenous Elder who taught one of my classes. I am concerned that some of my peers might not realize that AI tools can sometimes produce incorrect information, leading to potential misunderstandings in their essays.”
  • “The environmental impact of these AI tools is troubling.”
  • “I think that AI is scary because I have not used it before. I am afraid professors will think I plagiarized if I use it.”
  • “I appreciate that instructors are letting us use AI because I have lost marks for grammar mistakes in my classes.”
  • “I tried some different tools, but their interfaces had small font sizes that could not be changed, making them inaccessible to me, so I struggled with this assignment.”

The Problem:

How can Dr. Hosseini ensure that all his students develop critical skills in evaluating AI-generated content while also learning to generate ideas independently, addressing ethical considerations, and minimizing the environmental impact of AI use?

 

Case Analysis: Balancing Critical Evaluation and Sustainability in AI-Driven Assignments

This case analysis examines the integration of AI-generated essay critiques in Dr. Faizollah Hosseini’s Academic Writing 101 course for multilingual students at Maple City University. It explores how this innovative assignment can be revised to enhance student skills while addressing ethical concerns such as environmental sustainability.

Statement of the Problem

Dr. Hosseini must balance the educational benefits of critiquing AI-generated essays with ethical considerations. While Dr. Hosseini’s assignment was designed to support students’ skill development through active learning (Bransford et al., 2000; Piaget, 1954), and 40% of his students produced thorough critiques of AI-generated essays, he observed that the remaining 60% of the class struggled with the assignment in some way: nearly half had difficulty identifying the nuances of AI-generated content, and 13% misunderstood the purpose of the assignment entirely by using AI to write their critiques for them. Moreover, both the University Environmental Sustainability Officer and 33% of the students expressed ethical concerns with AI use, including concerns about its impact on the environment.

Thesis

Effective strategies for addressing the problem should foster critical evaluation skills, promote ethical understanding, and implement sustainable practices to reduce the environmental impact of AI. This case analysis will argue that Dr. Hosseini should redesign his assignment and create an AI digital module. These interventions could effectively enable ethical engagement with AI-generated papers while building students’ skills in critical thinking, writing, and analysis.

Decision Criteria

Based on the rationale for the assignment, student feedback, and educational theories, the solutions should

  • enhance writing skills and help students avoid over-reliance on AI for idea generation;
  • help all students familiarize themselves with AI-generated content and develop skills in critically assessing it for argument, biases, structure, and style;
  • ensure ethical use of AI tools and address ethical issues with AI tools through consideration of accessibility and privacy policies; and
  • implement environmentally sustainable practices.

Classroom Considerations, Implementations and Rationale

Assignment Redesign

Dr. Hosseini should redesign the assignment for his next cohort by incorporating opportunities for group work. Yan (2023) found that collaborative learning in an English as a Foreign Language program for undergraduates using ChatGPT for L2 writing improved students’ competencies with the tool. To support this, Dr. Hosseini could provide students with a folder containing about six AI-generated essays. Students could work individually or in groups, fostering peer tutoring: those familiar with generative AI can help those who find it challenging, such as students who feel AI work is “too polished,” those who fear plagiarism accusations, and those who depended on AI to write their critiques. Moreover, Dr. Hosseini could draw on the research by Kelly (2024) to guide students in understanding the ethical implications of AI use in the classroom. Integrating a collaborative assignment with curated papers would reduce the number of interactions and prompts that students would have to engage with, providing a more environmentally sustainable assignment while maintaining the active learning process. If Dr. Hosseini reduced the number of prompts in this way and allowed students to analyze essays in a folder on the course website, he could also enhance the accessibility of his course, since students would not have to interact with any inaccessible generative AI interfaces. The same change would address students’ privacy concerns, because students would not have to give their data to tools that may reuse it for other purposes.

To ensure consistency and growth in students’ writing skills, Dr. Hosseini must demonstrate how to critique AI-generated essays and provide strategies for identifying gaps in AI writing. Addressing concerns like the lack of resources about generative AI is crucial. As Acar (2023) demonstrated, instructors’ discussions and analyses of model texts can help students understand writing in unfamiliar genres, which will be essential in bridging gaps in knowledge and skill. Dr. Hosseini could create a group work activity in which he assigns each group of students a different model research paper that was published in an undergraduate student journal, that focuses on a theme similar to one of the six AI-generated essays, and that critically examines and interrogates biases. He could give students a set of questions for comparing features of the model texts and the AI-generated texts, helping them identify ways in which the AI-generated texts might effectively or ineffectively address professors’ expectations for research papers. Students could then help each other apply what they learned about analyzing and writing strong research papers and report back to their instructor and classmates in a discussion. Furthermore, conducting this assessment multiple times throughout the term would be beneficial, as spaced repetition improves learning and retention (Cepeda et al., 2006). Even if each individual assessment carries little weight, the skills developed are significant.

Utilization of a Digital Module

To enhance student engagement and understanding of AI, Dr. Hosseini could develop an online platform serving as a comprehensive resource hub. This digital module would provide essential tools, information, and interactive elements to enrich the learning experience.

The module would include a variety of educational resources such as tutorials and guides with step-by-step instructions on using AI tools like ChatGPT and Google Gemini, covering basics, troubleshooting, and advanced features. Additionally, video lectures would delve into AI principles, its applications in academic writing, and environmental and other ethical considerations, complemented by access to articles, research papers, and case studies on AI in education. Although all online activities contribute to a digital carbon footprint, the module could promote sustainability by encouraging critical reflection on ways of reducing that footprint. For instance, it could apply strategies suggested by the David Suzuki Foundation (2024), including engaging with AI more selectively and choosing AI tools that support sustainable practices. Importantly, the module could be attentive to the perspectives of Indigenous Elders and thinkers on ethical AI use. For example, Dr. Hosseini could foreground questions from the Indigenous Protocol and Artificial Intelligence Working Group (2019), such as “How do we imagine a future with A.I. that contributes to the flourishing of all humans and non-humans?”

Interactive components would play a crucial role in this module. Discussion forums and padlets, moderated by Dr. Hosseini, would allow students to ask questions, share experiences, and discuss AI-generated content. Practice exercises would enable students to critique AI-generated essays, enhancing their critical evaluation skills, while quizzes and assessments would test their understanding of AI concepts and ethical considerations. Additionally, the module could include in-class debates, encouraging students to explore different perspectives on AI-related issues; an example could be the sustainability and carbon footprint of AI. This structured, interactive approach would not only improve students’ familiarity with AI but also foster a deeper, more critical engagement with the technology. Balancing the module with the in-class discussion and an assignment redesign that does not require individual students’ AI interactions could also promote environmental sustainability.

Conclusion

To address the challenges of integrating AI in academic writing, Dr. Hosseini should adopt a multifaceted approach that enhances critical evaluation skills, promotes ethical understanding, ensures equitable access to technology, and considers environmental sustainability. By redesigning assignments to include collaborative elements and reduced interactions with AI tools, providing clear guidance on critiquing AI-generated content, and developing a comprehensive digital module combined with class discussion, he can create a supportive and engaging learning environment. This holistic strategy will not only prepare students to navigate the complexities of AI in academic contexts but also foster a deeper appreciation of the ethical and environmental implications of AI technology. Through these methods, Dr. Hosseini can ensure that his students develop their skills in thoughtful, informed, and responsible AI use.


References

Acar, A. S. (2023). Genre pedagogy: A writing pedagogy to help L2 writing instructors enact their classroom writing assessment literacy and feedback literacy. Assessing Writing, 56. https://doi.org/10.1016/j.asw.2023.100717

Bransford, J. D., Brown, A. L., Cocking, R. R., Committee on Developments in the Science of Learning, Committee on Learning Research and Educational Practice, National Research Council (U.S.), & Commission on Behavioral and Social Sciences and Education (2000). How people learn: Brain, mind, experience, and school (Expanded ed.). National Academy Press.

Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132(3), 354–380. https://doi.org/10.1037/0033-2909.132.3.354

David Suzuki Foundation. (2024). How to reduce your digital carbon footprint. https://davidsuzuki.org/living-green/how-to-reduce-your-digital-carbon-footprint/

Fullan, M. G. (1993). Why teachers must become change agents. Educational Leadership, 50(6), 1–12.

Indigenous Protocol and Artificial Intelligence Working Group. (2019). Indigenous AI. https://www.indigenous-ai.net/

Kelly, J. (2024). The prompting playbook: Strategies for AI engagement. In Artificial Intelligence in Education Conference: Shaping future classrooms. eCampusOntario. https://ecampusontario.pressbooks.pub/artificialintelligenceineducationconference/chapter/the-prompting-playbook-strategies-for-ai-engagement/

Li, P., Yang, J., Islam, M. A., & Ren, S. (2023). Making AI less “thirsty”: Uncovering and addressing the secret water footprint of AI models. arXiv. https://arxiv.org/abs/2304.03271

Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7). https://doi.org/10.1016/j.patter.2023.100779

OpenAI. (2024). Privacy policy. https://openai.com/policies/privacy-policy/

Piaget, J. (1954). The construction of reality in the child (M. Cook, Trans.). Basic Books.

Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3645–3650. Association for Computational Linguistics. https://doi.org/10.18653/v1/P19-1355

Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education and Information Technologies, 28, 13943–13967. https://doi.org/10.1007/s10639-023-11742-4


About the authors

Roman Naghshi (he/him) holds a Master of Education and is an Academic Department Assistant at King’s University College. Specializing in writing pedagogy, Roman serves as a Teaching Assistant and Research Assistant across both King’s and Huron University College. Passionate about the intersection of education and AI ethics, Roman’s work focuses on creating innovative learning environments that foster critical thinking and ethical engagement with technology.

Dr. John Drew (he/him) is an assistant professor and Writing Across the Curriculum specialist in the Department of English, French, and Writing at King’s University College at Western University on the traditional and treaty lands of the Anishnaabek, Haudenosaunee, Chonnonton, and Lūnaapéewak nations. He is a fellow of the Oxford Centre for Animal Ethics and an award-winning researcher whose teaching and scholarship focus on animals and nature in literature and film; multispecies empathy and justice; children, youth, and environmental education; decolonizing and anti-oppressive pedagogies; and writing and social change within the climate emergency. His work has been published or is forthcoming in Environmental Humanities, Journal of Childhood Studies, Humanimalia, Childhood Geographies, and Animal Studies Journal. His book, Animals in Literary Education: Towards Multispecies Empathy, is being published by Springer in the Palgrave Animal Ethics Series.

Dr. Emily Pez (she/her) loves teaching Writing courses part-time and tutoring at King’s University College, in Deshkan Zibiing territory. She is a European settler-descended speaker of English as a first language, from her mother’s side, with Italian as her second language, from her father. Her work experiences have mainly been with multilingual students, and they are a constant source of inspiration, learning, and joy for Emily.

License


Case Analysis: Critiquing AI-Generated Essays in an Academic Writing Course Copyright © 2025 by Roman Naghshi; John Drew; and Emily Pez is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.