Large-scale interview and simulation-based assessments such as objective structured clinical examinations (OSCEs) and multiple mini-interviews (MMIs) are logistically complex to administer, generate large volumes of assessment data, and are therefore strong candidates for computer-based marking systems. Adopting new technologies can be challenging, however, and technical failures, which are relatively commonplace, can delay implementation and create resistance to its continuation.
This paper reports on the adoption of an electronic marking system for OSCEs and MMIs following an unsuccessful initial trial. It describes how, after this setback, a staged implementation was used to adopt and embed iPad-based marking within our medical school, progressing from small- to larger-scale assessments, from a single assessment type to multiple types, and from lower- to higher-stakes assessments.
Critical factors in the success of this approach included thorough appraisal and selection of technologies, rigorous assurance of system reliability and security, continual review and refinement, and careful attention to implementation and end-user training. Stakeholder engagement was also crucial, particularly where previous failures or setbacks have occurred.