4.1 Designing Assessments with Integrity
Designing assessments with integrity, especially with AI, means creating evaluations that promote honesty, fairness, and transparency. This involves either designing assessments and rubrics that discourage or limit the use of AI, or ensuring that AI tools are used ethically to support learning while also teaching students to critically assess AI outputs and understand their limitations.
Assessment Design and AI
Effective assessment design is vital to maintaining academic integrity. Regarding AI use, assessments should be designed to minimize opportunities for dishonesty and encourage genuine learning. Here are some ideas to consider when designing assessments:
Explore the topics below to learn more about assessment design and academic integrity.
Reflection: One Faculty Member’s Perspective
Review the Hub post Embedding AI in Activities and Assessments: A 5 for 1 Deal, by Anita Nickerson, and reflect on the following questions:
- What is the level of permission for AI use?
- How does the professor help students learn about the appropriate citation of AI?
- How does the professor help students learn about the appropriate use of AI?
- How does the professor help students learn about the appropriate disclosure of AI use?
Learn more
Learn more about Smart Solutions: Authentic Assessments for Gen AI-Era Academic Integrity (Faculty Learning Hub).
Set a Level of Permitted Use of AI
AI integration in education can vary from full permission to strict prohibition. It’s essential to strike a balance that maximizes the benefits of AI while addressing concerns about academic integrity and fairness.
| Full Permission | Selective Permission | Prohibition |
| --- | --- | --- |
| Encourages using AI for various tasks, such as brainstorming, drafting, and revising assignments. This approach requires clear guidelines and modelling of ethical AI use. | Allows AI to be used for specific tasks or stages of an assignment, such as initial research or generating ideas, but not for final submissions. | Restricts AI use entirely, often due to concerns about cheating or undermining learning outcomes. This approach may be necessary for specific assessments but should be accompanied by clear rationales and alternative support mechanisms. |
While it may seem more straightforward to prohibit AI entirely, AI can be a powerful tool for cognitive offloading and handling routine tasks, allowing students to focus on higher-order thinking skills. For example, AI can assist in data organization or initial idea generation, freeing students to concentrate on analysis and synthesis. You may wish to consider identifying specific tasks where AI use is appropriate and beneficial, ensuring that students still demonstrate the intended learning outcomes.
Factors affecting the permission level you set will include college guidelines, appropriateness for learning outcomes, the tenability of AI use in the assignment, and support of students to use AI safely and responsibly. See the checklist below to help you select an AI permission level.
Setting Your Permitted AI Use Level: A Checklist
How do you know what permission level to set for your course or assignments? Factors that may shape your decision toward full or selective permission include the following:
- the learning outcome provides opportunities for AI to support, rather than replace, learning
- the use of AI provides accessible and inclusive learning for all students
- the assignment and its evaluation criteria make AI use tenable (rather than vulnerable to prohibited or inappropriate use)
- the assignment’s format makes AI use possible
- you can guide and support students to use AI safely and responsibly
- the assignment’s instructions make it clear if and how AI use is permitted (and not permitted)
- the assignment includes a way of disclosing appropriate AI use, where relevant
- the risks and ethical considerations of using AI have been accounted for
Learn more
Learn more about The Optional Use of GenAI in Assessments (Faculty Learning Hub). See a two-part series on AI-Adapting Assignments: Restricting AI Use (Part 1) and Encouraging AI Use (Part 2).
Getting Started with Selective Permission of AI Use
Faculty who want to encourage AI use in measured ways can start with small, targeted changes that incorporate AI into an assessment. These minor adjustments can significantly enhance student engagement and learning outcomes without requiring a complete overhaul of existing assessment methods.
Here are some examples of tasks students might complete with AI:
- Brainstorm ideas (and identify which idea was used)
- Formulate an outline (and ask for a copy of the AI-generated outline)
- Create an image (and ask for the prompt)
- Locate some information (and ask for the prompt)
- Explain a concept (and ask for the output or a citation of the output)
- Ask for feedback (and ask for the results)
Clarify How, How Much, and When to Use AI
Be clear about how you invite students to use AI for learning, and how much or at what point you want students to “pick up” or “put down” generative AI. By being transparent about the permitted ways to use AI, and in which parts of learning tasks or assignments, you help students understand its value for some tasks and its risks for others.
| How | How Much | When |
| --- | --- | --- |
| Do you want students to use AI as: | Do you want students to engage with AI: | Do you want students to develop their assignments with AI: |
Reflecting on the Spectrum of Student AI Use for Learning Tasks
Review the job aid, Spectrum of Student AI Use for Learning Tasks. You may download a copy.
Choose a column and a row to identify the kinds of tasks that you might ask students to do with AI based on the type of activity.
Ask yourself:
- What is the type of learning activity or task that AI may support in my classroom? How do I want students to use AI? (see Column 1)
- What are the contexts of the task that make AI use for that activity desirable or feasible in my classroom? How much do I want students to use AI? (see Row 1)
- What will I need to do to provide clear expectations, instructions, and support for using AI in this way and to this extent?
Try This! Using the AI Assessment Scale
The AI Assessment Scale Revisited: A Framework for Educational Assessment (Perkins, Roe, & Furze, 2024) describes a five-level approach to permitted generative AI use. This chart summarizes the AI Assessment Scale, which the article describes in more detail:
| Permitted Level | Explanation |
| --- | --- |
| No AI | You must not use AI at any point during the assessment. You must demonstrate your core skills and knowledge. |
| AI Planning | You may use AI for planning, idea development, and research. Your final submission should show how you developed and refined these ideas. |
| AI Collaboration | You may use AI to assist with specific tasks such as drafting text and refining and evaluating your work. You must critically evaluate and modify any AI-generated content you use. |
| Full AI | You may use AI extensively throughout your work, as you wish or as directed. Focus on directing AI to achieve your goals while demonstrating your critical thinking. |
| AI Exploration | You should use AI creatively to solve the task, potentially co-designing new approaches with your professor. |

AI Assessment Scale summary, CC BY-NC-SA
Learn More
For more guidance on the point in a learning task or assignment development process at which you may wish students to use AI, see Start, Continue, or End Learning with AI.
Scaffolding Assignments with Multiple Formative Assessment Types
You can use AI to create a range of knowledge checks, scenarios, and other types of assignments based on a learning outcome or goal. This is because LLMs are good at pattern recognition and repetition, which means they can quickly generate content variations using different question structures.
Multiple Assessment Types Power Prompt, adapted from Dr. Phillipa Hardman (2024)
Create multiple suggestions on how to assess [learning outcome]. You must generate:
- 5 multiple choice questions, each with a stem that poses a simple question, one correct answer, and plausible distractors for the remaining options
- 5 scenario-based questions and activities, in which the scenario is [number] words long and the questions increase in difficulty
- 5 application-level questions and activities, in which ideas are applied to a hyper-specific context, organization, area of student interest, etc.
For each question you must:
- Provide sample answers and common errors or misconceptions.
- State the related Bloom’s taxonomy domain of learning and level.
- Explain the conditions in which it will be most effective, e.g., type of learner, mode of delivery, etc.
Designing Grading Tools with AI
AI can assist in designing assessments and grading tools (such as checklists or rubrics) that are fair, transparent, and aligned with learning outcomes.
- Rubric creation: AI can help generate detailed rubrics based on learning outcomes and key performance indicators. This ensures consistency and clarity in grading.
- Using data analytics: AI can analyze assessment data to identify general trends and patterns in classwide performance. This information can be used to improve teaching strategies and tailor instruction to better meet student needs.
- Formative assessments: AI can help create formative (non-graded or low-stakes) assessments that provide ongoing insights into student learning. This allows educators to make timely interventions and support students throughout the learning process.
Caution
Conestoga’s AI Do’s and Don’ts document directs faculty not to grade student work with AI. This means not putting student work into a generative AI tool, including Copilot. While you may wish to use generative AI to review your own feedback to students, to help mitigate bias in grading and ensure all students are evaluated on the same criteria, this should not extend to submitting student work to Copilot.
Assessment Items Related to AI Use
Are you looking to encourage or discourage the use of AI in a rubric? See Structural Patterns of LLM-Generated Content – A Checklist (adapted from Steere, 2024):
- AI-written content tends to get straight to the point
- AI-generated content often creates lists
- AI uses formulaic transitional phrases
- AI-generated work tends to remain in the third person
- AI-generated essays are often repetitive
- AI writing often contains exaggerated and “flowery” prose
Learn More
See more examples in this bank of sample rubric items to discourage or encourage AI use: AI Savvy Rubrics for Writing Assignments (Faculty Learning Hub).
Try This! Exploring Copilot Prompts for Assessments and Feedback
Use Copilot to explore various aspects of assessments. Follow the steps below for each activity, inserting your own course and assignment information into the sample prompts where indicated.
Rubric Creation
Use Copilot to create a detailed, step-by-step rubric for an upcoming assignment.
- Step 1: Identify the assignment’s learning outcomes and key performance indicators. For a research paper, the learning outcomes might include critical thinking, research skills, and writing proficiency.
- Step 2: Input these criteria into Copilot. Enter criteria such as “clarity of argument,” “use of sources,” and “grammar and style.”
- Step 3: Review and refine the generated rubric to ensure it meets your expectations. Adjust the weight of each criterion based on its importance to the overall assignment.
Sample Prompt:
“Copilot, create an analytic rubric using the ‘5+2 Level’ structure for my [insert course name] course. Begin by identifying the learning outcomes [LO1, LO2] being assessed and the specific criteria to be evaluated in the assessment [Criterion 1, 2, 3, 4]. Next, ensure the rubric includes the following distinct levels: Incomplete (Zero), Unacceptable, Developing, Acceptable (PASS), Emerging, Accomplished, and Exceptional. Next, set up the rubric structure. Assign total point values for each criterion, using multiples of 5, with a maximum of 20 points per criterion. Next, provide a detailed description for each criterion level. Ensure the description is specific, concrete, succinct, and aligned with the criterion and the level. Next, set point values using the provided charts, use the bolded number in each cell as the default point value, and consider including a range for greater differentiation within a level. Finally, format the analytic rubric using a table format, including all levels and grades on the x-axis and all criteria descriptors on the y-axis.”
Supporting Bias Reduction in Feedback
Implement Copilot to analyze grading patterns and identify potential biases.
- Step 1: Collect a sample of feedback from graded assignments.
- Step 2: Use Copilot to analyze the feedback given. Review the Copilot-generated report to identify any patterns of bias or trends in comments.
- Step 3: Review Copilot’s suggestions for making your comments more constructive and supportive, including the rewritten examples and invitational reflection questions.
Sample Prompt:
“Please analyze the following sample of feedback I have provided to students on their graded assignments:
<Insert your feedback samples here>
Please thoroughly review this feedback and identify any patterns of bias or recurring trends in my comments. Based on this analysis, provide detailed suggestions on how to make my feedback more constructive and supportive, including specific rewritten examples where applicable. Additionally, please offer invitational reflection questions to help me critically assess and improve my feedback practices moving forward.”
Using Data Analytics
Leverage Copilot to analyze assessment data and improve teaching strategies.
- Step 1: Gather anonymized data from recent assessments.
- Step 2: Input the data into the College’s licensed version of Copilot.
- Step 3: Review the trends and patterns identified by Copilot. Identify areas where students struggle, such as specific topics or question types.
- Step 4: Use these insights to adjust your teaching strategies and address areas where students struggle. This may include incorporating more interactive activities or additional resources for challenging topics.
Sample Prompt:
“Please analyze the following anonymized data from recent assessments:
[Insert anonymized test scores, assignment grades, etc.]
First, carefully review the trends and patterns identified, particularly focusing on areas where students consistently face challenges, such as specific topics or question types. Based on these insights, provide suggestions for adjusting my teaching strategies to better support student learning, including incorporating more interactive activities or additional resources for the identified challenging areas.”
Formative Assessments Creation
Create formative assessments using Copilot to provide insights into student learning.
- Step 1: Identify key concepts that need to be assessed. Focus on core concepts like thesis development, evidence integration, and analytical writing.
- Step 2: Use Copilot to generate formative assessment questions. Generate multiple-choice questions, short answer prompts, or essay topics.
- Step 3: Administer these assessments regularly to monitor student progress.
- Step 4: Use the results to make timely interventions and support students’ learning.
Sample Prompt:
“Using Copilot, create formative assessments tailored to the specific learning needs of my [insert course name] course. Begin by identifying key concepts that need to be assessed, such as [insert core concepts like thesis development, evidence integration, analytical writing, etc.]. Based on the identified concepts, recommend a few specific types of formative assessments. For example, multiple-choice questions could assess understanding of thesis development by asking students to identify the strongest thesis statement from a set of options. Short answer prompts could assess evidence integration by having students briefly explain how a piece of evidence supports an argument. Essay topics could evaluate analytical writing by requiring students to construct a well-supported analysis of a given text or topic. Next, provide recommendations on the optimal timing and methods for administering these assessments. For instance, suggest administering multiple-choice questions early in the unit to gauge initial understanding and using essay topics later as a summative check of students’ analytical skills. Finally, before creating the formative assessments, ask whether these ideas are satisfactory or if more information is needed to refine the approach further.”
Formative Assessments Creation for an Upcoming Assignment
Use Copilot to create prompts for formative feedback assessments for an upcoming assignment.
“You are an expert in creating prompts for formative feedback assessments in higher education using AI tools like Copilot. I want an interactive lesson focused on formulating effective prompts for generating formative feedback assessments that support student learning. Please give me key suggestions for improving my prompting technique when creating these assessments. After offering the suggestion, ask me a question to check my understanding. Wait for my response before proceeding. If my answer is incorrect, please correct me or provide further guidance with examples. Ensure that all questions are related to using formative feedback assessments to enhance teaching and learning in higher education. Do not give me more information until I confirm I understand or have answered correctly. As we go through the lesson, please provide me with examples of how well-formulated prompts can lead to effective feedback mechanisms and how these assessments can be used to monitor student progress and intervene when necessary. Conclude the session by summarizing what I have learned, highlighting my strengths and offering advice on continuing to practice and refine my prompting skills for creating impactful formative feedback assessments.”
Inquiry-Based Feedback to Support Independent Learning
Use Copilot to refine feedback on student work when you want to prompt students to think differently, using a non-directive, inquiry-based approach that encourages reflection. Such feedback focuses on question-asking, and Copilot can assist in forming the kinds of questions you might ask depending on the assessment type and the issues you are responding to in students’ work.
Sample prompt: “Imagine you are a college professor providing feedback to students on [insert assignment type here]. You notice students have [insert issue that you’d like students to revise, improve, or do differently]. Provide feedback to students using questions. Be non-directive and curious.”