Ramon Ziai, Niels Ott and Detmar Meurers
Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications (BEA7). 2012.
A number of research subfields are concerned with the automatic assessment of student answers to comprehension questions, from language learning contexts to computer science exams. They share the need to evaluate free-text answers but differ in task setting and grading/evaluation criteria, among other aspects.
This paper aims to foster synergy between these different research strands. It discusses the strands, details the crucial differences between them, and explores under which circumstances systems can be compared using publicly available data. To that end, we present results with the CoMiC-EN Content Assessment system (Meurers et al., 2011a) on the dataset published by Mohler et al. (2011) and outline what was necessary to perform this comparison. We conclude with a general discussion of the comparability and evaluation of short answer assessment systems.
Bibtex entry:
@InProceedings{Ziai.Ott.Meurers-12,
  author    = {Ramon Ziai and Niels Ott and Detmar Meurers},
  title     = {Short Answer Assessment: Establishing Links Between Research Strands},
  booktitle = {Proceedings of the 7th Workshop on Innovative Use of NLP for Building Educational Applications (BEA7)},
  year      = {2012},
  address   = {Montreal, Canada},
  publisher = {Association for Computational Linguistics},
  pages     = {190--200},
  url       = {http://purl.org/dm/papers/ziai-ott-meurers-12.html}
}