The Script Concordance Test for clinical reasoning in paediatric medicine: Medical student performance and expert panel reliability

Authors

  • Anne Morris, University of Sydney
  • Dianne E Campbell, University of Sydney

DOI:

https://doi.org/10.11157/fohpe.v16i2.65

Keywords:

Script Concordance Test, student, clinical reasoning, modified essay question

Abstract

Background:

This study aimed to determine the correlation between student performance in clinical reasoning on the Script Concordance Test (SCT) and a modified essay question (MEQ) exam in a paediatric teaching block and to measure the intra-rater reliability of the expert scoring panel.

Method:

A 65-item assessment was developed using the accepted SCT method and scored against the responses of a panel of 10 general and subspecialty paediatricians. Student scores for the summative MEQ examination at the end of the child and adolescent health block were compared with scores on the SCT. Intra-rater reliability was measured for the 10 paediatricians on the expert panel.
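The "accepted SCT method" referred to here is commonly the aggregate scoring approach described by Charlin et al. (2000), in which each response to an item earns partial credit in proportion to how many panellists chose it, with the modal panel response worth full marks. A minimal sketch in Python, assuming Likert-style responses; the function names are illustrative, not from the study:

```python
from collections import Counter

def sct_item_key(panel_responses):
    """Build the scoring key for one SCT item from panel answers.

    panel_responses: the Likert responses (e.g. -2..+2) that the
    expert panel gave to this item.
    Returns a dict mapping each response to its credit in [0, 1].
    """
    counts = Counter(panel_responses)
    modal = max(counts.values())
    return {resp: n / modal for resp, n in counts.items()}

def sct_score(student_answers, panel):
    """Sum a student's credit over all items.

    student_answers: one response per item.
    panel: one list of panel responses per item.
    """
    total = 0.0
    for answer, responses in zip(student_answers, panel):
        key = sct_item_key(responses)
        total += key.get(answer, 0.0)  # responses no panellist chose score 0
    return total

# One item answered by a 10-member panel:
panel = [[+1, +1, +1, +1, +1, +1, 0, 0, -1, -1]]
key = sct_item_key(panel[0])
# modal response +1 (6 votes) earns full credit; 0 and -1 each earn 2/6
```

Under this key, a student matching the modal panel response earns full credit for the item, minority panel responses earn fractional credit, and responses no panellist chose earn zero.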

Results:

One hundred and two students completed both the SCT and the MEQ examination; the correlation coefficient between the two scores (r = 0.46) indicated moderate correlation. The weighted Cohen's kappa for the paediatricians on the panel ranged from 0.61 to 0.86, demonstrating good to excellent intra-rater agreement.
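Weighted Cohen's kappa quantifies agreement between a panellist's paired ordinal ratings (e.g. test vs. re-test responses) while penalising large disagreements more than small ones. The abstract does not specify the weighting scheme, so the linear weights below are an assumption (quadratic weights are also common); this is a sketch in pure Python, not the study's analysis code:

```python
def weighted_kappa(r1, r2, categories):
    """Linearly weighted Cohen's kappa between two rating lists."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # joint proportion of each (rating1, rating2) pair
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    # marginal proportions for each rater
    p1 = [sum(row) for row in obs]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # linear disagreement weights: 0 on the diagonal, 1 at the extremes
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp

# Perfect test-retest agreement gives kappa = 1
cats = [-2, -1, 0, 1, 2]
print(weighted_kappa([1, 0, -1, 2], [1, 0, -1, 2], cats))  # 1.0
```

By the Landis and Koch (1977) benchmarks cited in the reference list, the panel's values of 0.61 to 0.86 fall in the substantial-to-almost-perfect range, consistent with the "good to excellent" description above.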

Conclusion:

We found that the MEQ is not a reliable means of measuring medical students' clinical reasoning, correlating only moderately with the SCT, and that alternative methods such as the SCT should be considered. Our finding of high intra-rater reliability for the paediatricians on the scoring panel is the first published using this methodology and suggests that, for lower-stakes examinations, there is no need to re-test examiners. We do, however, propose that this simple method of assessing intra-rater reliability be considered for high-stakes medical student examinations.

References

Carriere, B., Gagnon, R., Charlin, B., Dowling, S., & Bordage, G. (2009). Assessing clinical reasoning in pediatric emergency medicine: Validity evidence for a script concordance test. Annals of Emergency Medicine, 53, 647–652.

Charlin, B., Gagnon, R., Pelletier, J., Coletti, M., Abi-Rizk, G., Nasr, C., . . . Van der Vleuten, C. (2006). Assessment of clinical reasoning in the context of uncertainty: The effect of variability within the reference panel. Medical Education, 40, 848–854.

Charlin, B., Roy, L., Brailovsky, C., Goulet, F., & van der Vleuten, C. (2000). The Script Concordance Test: A tool to assess the reflective clinician. Teaching and Learning in Medicine, 12(4), 189–195.

Dory, V., Gagnon, R., Vanpee, D., & Charlin, B. (2012). How to construct and implement script concordance tests: Insights from a systematic review. Medical Education, 46, 552–563.

Duggan, P., & Charlin, B. (2012). Summative assessment of 5th year medical school students’ clinical reasoning by script concordance test: Requirements and challenges. BMC Medical Education, 12, 29. doi: 10.1186/1472-6920-12-29

Fournier, J. P., Demeester, A., & Charlin, B. (2008). Script concordance tests: Guidelines for construction. BMC Medical Informatics and Decision Making, 8, 18. doi: 10.1186/1472-6947-8-18

Gagnon, R., Charlin, B., Coletti, M., Sauve, E., & van der Vleuten, C. (2005). Assessment in the context of uncertainty: How many members are needed on the panel of reference of a script concordance test? Medical Education, 39, 284–291.

Gagnon, R., Charlin, B., Lambert, C., Carriere, B., & van der Vleuten, C. (2009). Script concordance testing: More cases or more questions? Advances in Health Sciences Education Theory and Practice, 14, 367–375.

Gagnon, R., Lubarsky, S., Lambert, C., & Charlin, B. (2011). Optimization of answer keys for script concordance testing: Should we exclude deviant panellists, deviant responses or neither? Advances in Health Sciences Education Theory and Practice, 16(5), 601–608.

Kelly, W., Durning, S., & Denton, G. (2012). Comparing a script concordance examination to a multiple-choice examination on a core internal medicine clerkship. Teaching and Learning in Medicine, 24(3), 187–193.

Landis, J. R., & Koch, G. G. (1977). An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers. Biometrics, 33, 363–374.

Lubarsky, S., Chalk, C., Kazitani, D., Gagnon, R., & Charlin, B. (2009). The script concordance test: A new tool for assessing clinical judgement in neurology. Canadian Journal of Neurological Sciences, 36, 326–331.

Lubarsky, S., Charlin, B., Cook, D. A., Chalk, C., & van der Vleuten, C. P. (2011). Script concordance testing: A review of published validity evidence. Medical Education, 45, 329–338.

Norman, G. (2005). Research in clinical reasoning: Past history and current trends. Medical Education, 39, 418–427.

Palmer, E., Duggan, P., Devitt, P., & Russell, R. (2010). The modified essay question: Its exit from the exit examination? Medical Teacher, 32, e300–e307.

Sibert, L., Darmoni, S. J., Dahamna, B., Hellot, F., Weber, J., & Charlin, B. (2006). Online clinical reasoning assessment with script concordance test in urology: Results of a French pilot study. BMC Medical Education, 6, 45. doi: 10.1186/1472-6920-6-45

Published

2015-04-09

How to Cite

Morris, A., & Campbell, D. E. (2015). The Script Concordance Test for clinical reasoning in paediatric medicine: Medical student performance and expert panel reliability. Focus on Health Professional Education: A Multi-Professional Journal, 16(2), 4–12. https://doi.org/10.11157/fohpe.v16i2.65

Section

Articles