Evolution or revolution to programmatic assessment: Considering unintended consequences of assessment change

Authors

  • Anna Ryan, Department of Medical Education, Melbourne Medical School, University of Melbourne, Australia
  • Deborah O'Mara, University of Sydney Medical School, Faculty of Medicine and Health, Australia
  • Mike Tweed, Department of Medicine, University of Otago Wellington, New Zealand

DOI:

https://doi.org/10.11157/fohpe.v24i2.703

Keywords:

medical, health professions, education, assessment, design, programmatic

Abstract

Assessment in the health professions is transforming. The reductionist, measurement-based approach that has dominated the past 50 years is giving way to a preference for more authentic assessment designed to promote and support learning. Assessment as a series of individual barriers, each to be surmounted, is being discarded in favour of systems of assessment designed to scaffold learner development and ensure sufficient opportunities for achievement. These changes are intended to avoid the negative impacts of previous assessment approaches, such as strategic gaming, unhealthy competition and a predominance of book study over immersion in clinical environments. However, unintended outcomes need to be considered when planning such transformative assessment change, both for those engaged in incremental, evolutionary change and for those taking a more rapid, revolutionary approach. We explore three key features of programmatic assessment: longitudinal use of multiple assessment formats, a focus on assessment for learning and collation of data by attribute for decision making. We highlight the intended and possible unintended outcomes related to these features from the perspective of evolutionary and revolutionary approaches to change. We postulate that careful consideration of unintended outcomes is essential when planning significant assessment redesigns in health professional education. Anticipating unintended outcomes might also provide both the motivation and rationale to advance assessment practice into the next 50 years, particularly through enhancements in technology and collaboration across and between education providers.

References

Albon, R. (2001). Examine them until they pass! Agenda: A Journal of Policy Analysis and Reform, 8(2), 185–191. https://www.jstor.org/stable/43199162

Bandiera, G., Sherbino, J., & Frank, J. R. (Eds.). (2006). The CanMEDS assessment tools handbook: An introductory guide to assessment methods for the CanMEDS competencies. The Royal College of Physicians and Surgeons of Canada.

Bierer, S. B., Dannefer, E. F., & Tetzlaff, J. E. (2015). Time to loosen the apron strings: Cohort-based evaluation of a learner-driven remediation model at one medical school. Journal of General Internal Medicine, 30(9), 1339–1343. https://doi.org/10.1007/s11606-015-3343-1

Bok, H. G., Teunissen, P. W., Favier, R. P., Rietbroek, N. J., Theyse, L. F., Brommer, H., Haarhuis, J. C. M., van Beukelen, P., van der Vleuten, C. P. M., & Jaarsma, D. A. (2013). Programmatic assessment of competency-based workplace learning: When theory meets practice. BMC Medical Education, 13(1), Article 123. https://doi.org/10.1186/1472-6920-13-123

de Jong, L. H., Bok, H. G., Schellekens, L. H., Kremer, W. D., Jonker, F. H., & van der Vleuten, C. P. (2022). Shaping the right conditions in programmatic assessment: How quality of narrative information affects the quality of high-stakes decision-making. BMC Medical Education, 22(1), Article 409. https://doi.org/10.1186/s12909-022-03257-2

Glover, C., & Brown, E. (2006). Written feedback for students: Too much, too detailed or too incomprehensible to be effective? Bioscience Education, 7(1), 1–16. https://doi.org/10.3108/beej.2006.07000004

Graham, I. S., Gleason, A. J., Keogh, G. W., Paltridge, D., Rogers, I. R., Walton, M., De Paola, C., Singh, J., & McGrath, B. P. (2007). Australian curriculum framework for junior doctors. Medical Journal of Australia, 186(S7), S14–S19. https://doi.org/10.5694/j.1326-5377.2007.tb00959.x

Hadjar, A., & Becker, R. (2009). Educational expansion: Expected and unexpected consequences. In A. Hadjar & R. Becker (Eds.), Expected and unexpected consequences of the educational expansion in Europe and the US (pp. 9–23). Haupt Berne.

Heeneman, S., Oudkerk Pool, A., Schuwirth, L. W., van der Vleuten, C. P., & Driessen, E. W. (2015). The impact of programmatic assessment on student learning: Theory versus practice. Medical Education, 49(5), 487–498. https://doi.org/10.1111/medu.12645

Hoang, N. S., & Lau, J. N. (2018). A call for mixed methods in competency-based medical education: How we can prevent the overfitting of curriculum and assessment. Academic Medicine, 93(7), 996–1001. https://doi.org/10.1097/ACM.0000000000002205

Kang, S. P., Chen, Y., Svihla, V., Gallup, A., Ferris, K., & Datye, A. K. (2022). Guiding change in higher education: An emergent, iterative application of Kotter’s change model. Studies in Higher Education, 47(2), 270–289. https://doi.org/10.1080/03075079.2020.1741540

Lyons, P. (2017, December 26). Peter Lyons: NCEA teaching our kids they don't need to try too hard. NZ Herald. https://www.nzherald.co.nz/nz/peter-lyons-ncea-teaching-our-kids-they-dont-need-to-try-too-hard/R63AFSZ6MK346OCPGWJXIO47CE/

Mak‐van der Vossen, M. (2019). “Failure to fail”: The teacher's dilemma revisited. Medical Education, 53(2), 108–110. https://doi.org/10.1111/medu.13772

Medical Council of New Zealand. (2014, February). New Zealand curriculum framework for prevocational medical training. https://www.mcnz.org.nz/assets/Forms/4b7ce95390/New-Zealand-Curriculum-Framework.pdf

Merton, R. K. (1936). The unanticipated consequences of purposive social action. American Sociological Review, 1(6), 894–904. https://doi.org/10.2307/2084615

Norcini, J., Anderson, M. B., Bollela, V., Burch, V., Costa, M. J., Duvivier, R., Hays, R., Palacios Mackay, M. F., Roberts, T., & Swanson, D. (2018). 2018 consensus framework for good assessment. Medical Teacher, 40(11), 1102–1109. https://doi.org/10.1080/0142159X.2018.1500016

O’Rourke, M., Hammond, S., O’Flynn, S., & Boylan, G. (2010). The medical student stress profile: A tool for stress audit in medical training. Medical Education, 44(10), 1027–1037. https://doi.org/10.1111/j.1365-2923.2010.03734.x

Pawson, R., Wong, G., & Owen, L. (2011). Known knowns, known unknowns, unknown unknowns: The predicament of evidence-based policy. American Journal of Evaluation, 32(4), 518–546. https://doi.org/10.1177/1098214011403831

Pearce, J., Reid, K., Chiavaroli, N., & Hyam, D. (2021). Incorporating aspects of programmatic assessment into examinations: Aggregating rich information to inform decision-making. Medical Teacher, 43(5), 567–574. https://doi.org/10.1080/0142159X.2021.1878122

Pearce, J., & Tavares, W. (2021). A philosophical history of programmatic assessment: Tracing shifting configurations. Advances in Health Sciences Education, 26(4), 1291–1310. https://doi.org/10.1007/s10459-021-10050-1

Pugh, K. J., & Zhao, Y. (2003). Stories of teacher alienation: A look at the unintended consequences of efforts to empower teachers. Teaching and Teacher Education, 19(2), 187–201. https://doi.org/10.1016/S0742-051X(02)00103-8

Richardson, M. (2022). Rebuilding public confidence in educational assessment. UCL Press.

Roberts, C., Khanna, P., Bleasel, J., Lane, S., Burgess, A., Charles, K., Howard, R., O’Mara, D., Haq, I., & Rutzou, T. (2022). Student perspectives on programmatic assessment in a large medical programme: A critical realist analysis. Medical Education, 56(9), 901–914. https://doi.org/10.1111/medu.14807

Ross, S., Hauer, K. E., Wycliffe-Jones, K., Hall, A. K., Molgaard, L., Richardson, D., Oswald, A., Bhanji, F., & ICBME Collaborators. (2021). Key considerations in planning and designing programmatic assessment in competency-based medical education. Medical Teacher, 43(7), 758–764. https://doi.org/10.1080/0142159X.2021.1925099

Ryan, A., & Judd, T. (2022). From traditional to programmatic assessment in three (not so) easy steps. Education Sciences, 12(7), 487. https://doi.org/10.3390/educsci12070487

Schut, S., Driessen, E., Van Tartwijk, J., van der Vleuten, C., & Heeneman, S. (2018). Stakes in the eye of the beholder: An international study of learners’ perceptions within programmatic assessment. Medical Education, 52(6), 654–663. https://doi.org/10.1111/medu.13532

Schut, S., Heeneman, S., Bierer, B., Driessen, E., van Tartwijk, J., & van der Vleuten, C. (2020). Between trust and control: Teachers' assessment conceptualisations within programmatic assessment. Medical Education, 54(6), 528–537. https://doi.org/10.1111/medu.14075

Schut, S., Maggio, L. A., Heeneman, S., van Tartwijk, J., van der Vleuten, C., & Driessen, E. (2021). Where the rubber meets the road: An integrative review of programmatic assessment in health care professions education. Perspectives on Medical Education, 10(1), 6–13. https://doi.org/10.1007/s40037-020-00625-w

Schuwirth, L. W., & van der Vleuten, C. P. (2011). Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher, 33(6), 478–485. https://doi.org/10.3109/0142159X.2011.565828

Schuwirth, L. W., & van der Vleuten, C. P. (2019). Current assessment in medical education: Programmatic assessment. Journal of Applied Testing Technology, 20(S2), 2–10.

Schuwirth, L. W., & van der Vleuten, C. P. (2020). A history of assessment in medical education. Advances in Health Sciences Education, 25(5), 1045–1056. https://doi.org/10.1007/s10459-020-10003-0

Schuwirth, L. W., van der Vleuten, C., & Durning, S. J. (2017). What programmatic assessment in medical education can learn from healthcare. Perspectives on Medical Education, 6(4), 211–215. https://doi.org/10.1007/s40037-017-0345-1

Tait, G. R., & Kulasegaram, K. M. (2022). Assessment for learning: The University of Toronto Temerty Faculty of Medicine MD program experience. Education Sciences, 12(4), 249. https://doi.org/10.3390/educsci12040249

Torre, D., Rice, N. E., Ryan, A., Bok, H., Dawson, L. J., Bierer, B., Wilkinson, T. J., Tait, G. R., Laughlin, T., Veerapen, K., Heeneman, S., Freeman, A., & van der Vleuten, C. (2021). Ottawa 2020 consensus statements for programmatic assessment—2. Implementation and practice. Medical Teacher, 43(10), 1149–1160. https://doi.org/10.1080/0142159X.2021.1956681

Tweed, M. J., Thompson-Fawcett, M., & Wilkinson, T. J. (2013). Decision-making bias in assessment: The effect of aggregating objective information and anecdote. Medical Teacher, 35(10), 832–837. https://doi.org/10.3109/0142159X.2013.803062

Tweed, M., & Wilkinson, T. (2019). Student progress decision-making in programmatic assessment: Can we extrapolate from clinical decision-making and jury decision-making? BMC Medical Education, 19(1), Article 176. https://doi.org/10.1186/s12909-019-1583-1

van der Vleuten, C. P., & Schuwirth, L. W. (2005). Assessing professional competence: From methods to programmes. Medical Education, 39(3), 309–317. https://doi.org/10.1111/j.1365-2929.2005.02094.x

Wilkinson, T. J., & Tweed, M. J. (2018). Deconstructing programmatic assessment. Advances in Medical Education and Practice, 9, 191–197. https://doi.org/10.2147/AMEP.S144449

Wise, S. L., & DeMars, C. E. (2005). Low examinee effort in low-stakes assessment: Problems and potential solutions. Educational Assessment, 10(1), 1–17. https://doi.org/10.1207/s15326977ea1001_1

Published

2023-07-07

How to Cite

Ryan, A., O'Mara, D., & Tweed, M. (2023). Evolution or revolution to programmatic assessment: Considering unintended consequences of assessment change. Focus on Health Professional Education: A Multi-Professional Journal, 24(2), 185–195. https://doi.org/10.11157/fohpe.v24i2.703

Issue

Vol. 24 No. 2 (2023)
Section

ANZAHPE 50th Anniversary Collection