How do we know that we are assessing the “right things”?

Authors

  • Joy Rudland University of Otago
  • Cameron Lacey University of Otago
  • Kristin Kenrick University of Otago
  • Mike Tweed University of Otago

DOI:

https://doi.org/10.11157/fohpe.v18i1.209

Keywords:

assessment, medical education, blueprinting, subject representation

Abstract

Introduction: As assessment is perceived as a powerful tool for learning, it is important to ensure that assessments reflect the learning considered most significant. Assessment blueprinting offers the opportunity to ensure perceptions of what should be assessed align with what is assessed.

Methods: An expert panel was asked to determine the percentage of undergraduate assessment that should be devoted to broad domain areas previously agreed to define the outcomes of an undergraduate curriculum; this provided the blueprint. Staff who co-ordinated, implemented and assessed students on clinical runs indicated the percentage of their assessments allocated to the domain areas. Staff who constructed end-of-year summative assessments also analysed their assessments in terms of the domain areas. The expert-panel blueprint values were then compared with the actual assessment conducted, using the Mann–Whitney U test, to determine variations between the ideal and the actual assessment.
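The comparison described above can be sketched as follows. This is a minimal illustration, not the study's analysis: the domain percentages are invented, and the Mann–Whitney U statistic is computed by hand for self-containment (in practice a statistics package such as SciPy's `mannwhitneyu` would normally be used).

```python
def mann_whitney_u(x, y):
    """Return the Mann-Whitney U statistic for two samples.

    Ties receive the average of the ranks they span. Only the
    statistic is computed here; a p-value would come from a
    statistics library in real use.
    """
    combined = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        # values at positions i+1 .. j share the average rank
        ranks[combined[i]] = (i + 1 + j) / 2
        i = j
    n1, n2 = len(x), len(y)
    r1 = sum(ranks[v] for v in x)          # rank sum of sample 1
    u1 = r1 - n1 * (n1 + 1) / 2
    return min(u1, n1 * n2 - u1)

# Illustrative percentages for one domain (NOT the study's data):
# what each expert said should be allocated vs. what assessments
# actually allocated.
blueprint = [10, 12, 8, 11, 9, 10]
actual = [4, 5, 3, 6, 4, 5]

u = mann_whitney_u(blueprint, actual)
print(f"U = {u}")  # prints "U = 0.0": the two samples do not overlap
```

A small U relative to the maximum possible value (here n1 × n2 = 36) signals a large difference between the blueprint and actual allocations, which is how an under-assessed domain would surface in this kind of comparison.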

Results: What was considered important to assess closely aligned with what was assessed in most domain areas. The exceptions were the under-assessment of two domains: Māori Health, and Research and Information Literacy.

Conclusions: The chosen methodology identified areas that were under-represented in the actual student assessments. This prompted the school to consider whether this under-representation is problematic; if so, whether to redistribute or increase assessment, and whether the required increases should occur in-course or at the end of the year.


Published

2017-04-28

How to Cite

Rudland, J., Lacey, C., Kenrick, K., & Tweed, M. (2017). How do we know that we are assessing the “right things”?. Focus on Health Professional Education: A Multi-Professional Journal, 18(1), 80–87. https://doi.org/10.11157/fohpe.v18i1.209
