Improving the scoring of Serious Educational Games as Assessments

The demand for alternative forms of testing is growing. The 21st Century Initiative highlighted a growing desire from industry to introduce modern workplace skills into the school curriculum. Complex and procedural problem solving, systems management, and collaboration are in high demand in the workplace. However, until these skills can be tested, their impact on education is likely to be minimal. Serious Educational Games (SEGs) provide a format for carrying out these kinds of complex tasks, but we do not yet know how to score gaming data fairly. The way games are currently scored has been shaped by games-industry practices, which are not held to the same standards of accountability for accuracy as formal testing. The data produced during gameplay is unlike the kinds of data that assessors are used to analyzing. Without a convincing argument for the technical adequacy of the test from an assessment perspective, we cannot make a serious case for the widespread use of SEGs in education. This paper identifies key issues around the conceptualization of missingness, time, and iteration that affect scoring, through a case study of an educational gaming data set. It also provides an initial estimation of the fairness of the game scores.

Key words: Assessment; Big Data; Serious Educational Games; Scoring; Validation
