Question Answering Evaluation Survey

L. Gillard, P. Bellot, M. El-Bèze


Abstract
Evaluating Question Answering (QA) systems is a complex task: state-of-the-art systems involve processing steps whose influence on and contribution to the final result are unclear and need to be studied. We present key points on different aspects of QA system (QAS) evaluation: mainly as performed during large-scale campaigns, but also with insights into the evaluation of typical QAS software components. The last part of this paper is devoted to a brief presentation of the French QA campaign EQueR and discusses two issues: inter-annotator agreement during the campaign and the reuse of reference patterns.
Anthology ID:
L06-1312
Volume:
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
Month:
May
Year:
2006
Address:
Genoa, Italy
Editors:
Nicoletta Calzolari, Khalid Choukri, Aldo Gangemi, Bente Maegaard, Joseph Mariani, Jan Odijk, Daniel Tapias
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
URL:
http://www.lrec-conf.org/proceedings/lrec2006/pdf/515_pdf.pdf
Cite (ACL):
L. Gillard, P. Bellot, and M. El-Bèze. 2006. Question Answering Evaluation Survey. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA).
Cite (Informal):
Question Answering Evaluation Survey (Gillard et al., LREC 2006)