A Framework for the Qualitative Analysis of Examinee Responses to Improve Marking Reliability and Item and Mark Scheme Validity

  • Create Date August 2, 2018
  • Last Updated August 2, 2018

The predominant view of validity is based on Messick's (1989) argument that validity is not a property of the test, but rather a property of the meaning of the test scores. Some researchers caution that this view risks ignoring the vital role of question and mark scheme developers in ensuring that assessments are valid. Pollitt et al. (2008), for example, propose a different conception of validity, which requires that the cognitive processes elicited by a question are those intended by the question writer. Validity in this sense can only be investigated through a thorough exploration of the student thinking that a question triggers.

Regardless of the conception of validity employed, a test cannot be valid if it does not produce scores that are consistent and relatively free from error. Reliability is therefore a necessary condition for validity, and marking reliability represents the greatest threat to the reliability of many assessments that use constructed-response items.

This paper presents a framework for using qualitative evidence captured from item responses to improve item and mark scheme validity and marking reliability in constructed-response items.

Key words: validity, marking reliability, construct irrelevant variance

Attached Files

paper_5b942ed.pdf