
Can Australian medical students’ predictions of peers’ responses assist with gaining reliable results on course evaluations?

Stephen Bacchi, Brad Guo, Stuart Brown, Ian Symonds, Judith (Nicky) Hudson

Abstract


Introduction: Student feedback is integral to the continuous improvement of medical programmes. A key challenge with student course evaluations is achieving response rates high enough for the results to be reliable. This study investigated whether asking students to predict their peers' responses, rather than report their own, could address this challenge.

Method: An anonymous paper-based student experience of learning and teaching (SELT) survey was distributed to the Year 1–3 medical student cohorts. Students responded to 20 survey statements, using a 6-option Likert-type scale. Ten statements evaluated students’ personal perspectives of the course, while the other 10 statements asked students to predict the most common response by their year cohort. Mean scores between the individual opinion-based and prediction-based statements were compared. An iterative process involving random subsampling was conducted to enable calculation of the minimum required number of responses for a stable outcome for each statement. 
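The article does not publish the subsampling algorithm itself, so the following is a minimal sketch of one plausible interpretation, assuming "stable outcome" means that every random subsample's mean falls within a fixed tolerance of the full-cohort mean. The function name, tolerance, trial count, and Likert data below are all illustrative, not taken from the study:

```python
import random
import statistics

def min_responses_for_stability(responses, tolerance=0.1, trials=200, seed=0):
    """Return the smallest subsample size whose mean stays within
    `tolerance` of the full-cohort mean in every random trial."""
    rng = random.Random(seed)
    full_mean = statistics.mean(responses)
    for n in range(5, len(responses)):
        stable = all(
            abs(statistics.mean(rng.sample(responses, n)) - full_mean) <= tolerance
            for _ in range(trials)
        )
        if stable:
            return n  # smallest n that was stable in all trials
    return len(responses)

# Illustrative 6-point Likert responses for one statement (259 students)
data_rng = random.Random(1)
cohort = [data_rng.choices(range(1, 7), weights=[1, 2, 5, 10, 8, 4])[0]
          for _ in range(259)]

print(min_responses_for_stability(cohort))
```

Running this per statement, for both the opinion-based and the prediction-based versions, would yield the per-statement minimum response counts that the study compares.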

Results: Two hundred and fifty-nine students participated (response rate 81.7%). For three of the 10 paired statements, the prediction-based survey accurately reproduced the opinion-based group mean; for the remaining seven pairs, the differences in means were statistically significant, although small. On average, the prediction-based SELT required significantly fewer responses for a stable outcome (189) than the opinion-based SELT (215) (95% CI of the difference 15.3–35.7, p < 0.001).

Conclusions: A prediction-based style of course evaluation using a 6-option Likert-type scale approximated the results gained when asking for individual opinion and required fewer responses to achieve a stable outcome.


Keywords


course evaluation; medical student; survey methods; response rate


References


Abrahams, M., & Friedman, C. (1996). Preclinical course-evaluation methods at U.S. and Canadian medical schools. Academic Medicine, 71(4), 371–374.

Al Kuwaiti, A., AlQuraan, M., Subbarayalu, A. V., & Piro, J. S. (2016). Understanding the effect of response rate and class size interaction on students evaluation of teaching in a higher education. Cogent Education, 3(1). doi:10.1080/2331186x.2016.1204082

Carifio, J., & Perla, R. (2008). Resolving the 50-year debate around using and misusing Likert scales. Medical Education, 42(12), 1150–1152. doi:10.1111/j.1365-2923.2008.03172.x

Cohen-Schotanus, J., Schonrock-Adema, J., & Schmidt, H. G. (2010). Quality of courses evaluated by "predictions" rather than opinions: Fewer respondents needed for similar results. Medical Teacher, 32(10), 851–856. doi:10.3109/01421591003697465

Crews, T. B., & Curtis, D. F. (2011). Online course evaluations: Faculty perspective and strategies for improved response rates. Assessment & Evaluation in Higher Education, 36(7), 865–878. doi:10.1080/02602938.2010.493970

Dolmans, D., Kamp, R., Stalmeijer, R., Whittingham, J., & Wolfhagen, I. (2014). Biases in course evaluations: "What does the evidence say?". Medical Education, 48(2), 219–220. doi:10.1111/medu.12297

Fleming, P., Heath, O., Goodridge, A., & Curran, V. (2015). Making medical student course evaluations meaningful: Implementation of an intensive course review protocol. BMC Medical Education, 15, 99. doi:10.1186/s12909-015-0387-1

Goodman, J., Anson, R., & Belcheir, M. (2014). The effect of incentives and other instructor-driven strategies to increase online student evaluation response rates. Assessment & Evaluation in Higher Education, 40(7), 958–970. doi:10.1080/02602938.2014.960364

Grava-Gubins, I., & Scott, S. (2008). Effects of various methodologic strategies: Survey response rates among Canadian physicians and physicians-in-training. Canadian Family Physician, 54(10), 1424–1430.

Guder, F., & Malliaris, M. (2013). Online course evaluations response rates. American Journal of Business Education, 6(3), 333–338.

Hofstee, W., & Schaapman, H. (1990). Bets beat polls: Averaged predictions of election outcomes. Acta Politica, 25, 257–270.

Kogan, J. R., & Shea, J. A. (2007). Course evaluation in medical education. Teaching and Teacher Education, 23(3), 251–264. doi:10.1016/j.tate.2006.12.020

Malone, M. G., Carney, M. M., House, J. B., Cranford, J. A., & Santen, S. A. (2018). Tit-for-tat strategy for increasing medical student evaluation response rates. Western Journal of Emergency Medicine, 19(1), 75–79. doi:10.5811/westjem.2017.9.35320

Parker, K. (2013). A better hammer in a better toolbox: Considerations for the future of programme evaluation. Medical Education, 47(5), 440–442. doi:10.1111/medu.12185

Porter, S., Whitcomb, M., & Weitzer, W. (2004). Multiple surveys of students and survey fatigue. New Directions for Institutional Research, 121, 63–73.

Schonrock-Adema, J., Lubarsky, S., Chalk, C., Steinert, Y., & Cohen-Schotanus, J. (2013). "What would my classmates say?" An international study of the prediction-based method of course evaluation. Medical Education, 47(5), 453–462. doi:10.1111/medu.12126

Sullivan, G., & Artino, A. (2013). Analyzing and interpreting data from Likert-type scales. Journal of Graduate Medical Education, 5(4), 541–542. doi:10.4300/JGME-5-4-18

Wadgave, U., & Khairnar, M. R. (2016). Parametric tests for Likert scale: For and against. Asian Journal of Psychiatry, 24, 67–68. doi:10.1016/j.ajp.2016.08.016




DOI: http://dx.doi.org/10.11157/fohpe.v19i2.250
