"

Evaluation Simulation

Chapter 2: Definition of Evaluation Measures Around Simulation

Formative Evaluation 

Formative evaluation focuses on improving performance and behaviour across three learning domains: skills, knowledge, and attitude. Formative assessment should be delivered with the participants’ experience level in mind and applied consistently, and it should include constructive feedback to support participants in meeting the expected outcomes. Feedback may be provided through coaching, cueing, or concept mapping. Above all, formative evaluation should offer supplemental strategies to help participants achieve the expected outcomes (Sando et al., 2013).

Formative Assessment: https://www.youtube.com/watch?v=-RXYTpgvB5I

Summative Evaluation

Summative evaluation is an essential measure of a participant’s performance or competence at the end of a predetermined period. It is standardized in format and scoring methods and is performed in a familiar environment by trained, objective observers. The evaluation is based on pre-established guidelines about participant errors, and the evaluation tool should be tested for validity and reliability. Additionally, it should include guidelines for cueing, predetermined parameters for terminating the scenario before its completion, pre-established criteria for rating the participant(s), and self-assessment by the participant when required (Sando et al., 2013).

Summative Assessment: https://www.youtube.com/watch?v=SjnrI3ZO2tU

High-stakes Evaluation

High-stakes assessments in simulation can be an effective way to assess an individual’s core competencies. Standardized checklists help enhance the reliability of the evaluation; they focus on specific skills and behaviours, which reduces subjectivity and increases accuracy. Detailed tools also help identify appropriate and inappropriate behaviours, which is crucial for fair and accurate assessments (Sando et al., 2013).

It is essential to use standardized methods when evaluating participants. Evaluation tools should be tested for validity and reliability, and inter-rater reliability should be established when multiple evaluators are involved. Evaluations should be conducted at an appropriate level of fidelity to ensure that participant outcomes can be achieved. Additionally, guidelines should be established for cueing participants and for terminating scenarios, evaluations should include self-assessment by the participant when required, and trained, objective observers should conduct the evaluations (Sando et al., 2013).

License


Faculty Simulation Toolkit Copyright © by Cynthia Hammond RN, BScN, MN(ACNP), Professor, Mohawk College, Hamilton, Ontario, Canada; Melissa Knoops RN, BScN, MA, Professor, Mohawk College, Hamilton, Ontario, Canada; Marie Morin RN, BScN, MN, CCSNE, Professor, Mohawk College, Hamilton, Ontario, Canada; Mozhgan Peiravi RN, BScN, MScN, DNP, Professor, Mohawk College, Hamilton, Ontario, Canada; John Pilla RN, BSc, MN, CCSNE, Professor, Mohawk College, Hamilton, Ontario, Canada; Shelley Samwel RN, BSN, MN, PhD (c), Professor, Mohawk College, Hamilton, Ontario, Canada; and Jennifer Stockdale RN, BScN, MScN, Professor, Mohawk College, Hamilton, Ontario, Canada is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.