2012
Evaluating Machine Reading Systems through Comprehension Tests
Anselmo Peñas | Eduard Hovy | Pamela Forner | Álvaro Rodrigo | Richard Sutcliffe | Corina Forascu | Caroline Sporleder
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
This paper describes a methodology for testing and evaluating the performance of Machine Reading systems through Question Answering and Reading Comprehension Tests. The methodology is being used in QA4MRE (QA for Machine Reading Evaluation), one of the labs of CLEF. The task was to answer a series of multiple-choice tests, each based on a single document. This allows complex questions to be asked but makes evaluation simple and completely automatic. The evaluation architecture is completely multilingual: test documents, questions, and their answers are identical in all the supported languages. Background text collections are comparable collections harvested from the web for a set of predefined topics. Each test received an evaluation score between 0 and 1 using c@1. This measure encourages systems to reduce the number of incorrect answers while maintaining the number of correct ones by leaving some questions unanswered. Twelve groups participated in the task, submitting 62 runs in three different languages (German, English, and Romanian). All runs were monolingual; no team attempted a cross-language task. We report here the conclusions and lessons learned after the first campaign in 2011.
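For reference, a sketch of the c@1 measure mentioned in the abstract, following Peñas and Rodrigo's formulation; the symbols below (n, n_R, n_U) are notation introduced here, where n is the total number of questions, n_R the number answered correctly, and n_U the number left unanswered:

\[
c@1 = \frac{1}{n}\left(n_R + n_U \cdot \frac{n_R}{n}\right)
\]

Under this definition each unanswered question earns partial credit proportional to the overall accuracy n_R/n, so abstaining is rewarded over answering incorrectly, which is the behaviour the abstract describes.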
2010
Extending English ACE 2005 Corpus Annotation with Ground-truth Links to Wikipedia
Luisa Bentivogli | Pamela Forner | Claudio Giuliano | Alessandro Marchetti | Emanuele Pianta | Kateryna Tymoshenko
Proceedings of the 2nd Workshop on The People’s Web Meets NLP: Collaboratively Constructed Semantic Resources
GikiCLEF: Crosscultural Issues in Multilingual Information Access
Diana Santos | Luís Miguel Cabral | Corina Forascu | Pamela Forner | Fredric Gey | Katrin Lamm | Thomas Mandl | Petya Osenova | Anselmo Peñas | Álvaro Rodrigo | Julia Schulz | Yvonne Skalban | Erik Tjong Kim Sang
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
In this paper we describe GikiCLEF, the first evaluation contest that, to our knowledge, was specifically designed to expose and investigate cultural and linguistic issues involved in structured multimedia collections and searching, and which was organized under the scope of CLEF 2009. GikiCLEF evaluated systems that answered questions that are hard for both humans and machines, in ten different Wikipedia collections, namely Bulgarian, Dutch, English, German, Italian, Norwegian (Bokmål and Nynorsk), Portuguese, Romanian, and Spanish. After a short historical introduction, we present the task, together with its motivation, and discuss how the topics were chosen. Then we provide another description from the point of view of the participants. Before disclosing their results, we introduce the SIGA management system, explaining the several tasks which were carried out behind the scenes. We then present the GIRA resource, offered to the community for training and further evaluating systems, comprising the 50 topics gathered and the solutions identified. We end the paper with a critical discussion of what was learned, advancing possible ways to reuse the data.
Evaluating Multilingual Question Answering Systems at CLEF
Pamela Forner | Danilo Giampiccolo | Bernardo Magnini | Anselmo Peñas | Álvaro Rodrigo | Richard Sutcliffe
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
The paper offers an overview of the key issues raised during the seven years of activity of the Multilingual Question Answering Track at the Cross Language Evaluation Forum (CLEF). The general aim of the Multilingual Question Answering Track has been to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages, also drawing attention to a number of challenging issues for research in multilingual QA. The paper gives a brief description of how the task has evolved over the years and of the way in which the data sets have been created, also presenting a brief summary of the different types of questions developed. The document collections adopted in the competitions are sketched as well, and some data about participation are provided. Moreover, the main evaluation measures used to assess system performance are explained, and an overall analysis of the results achieved is presented.
2004
Revising the Wordnet Domains Hierarchy: semantics, coverage and balancing
Luisa Bentivogli | Pamela Forner | Bernardo Magnini | Emanuele Pianta
Proceedings of the Workshop on Multilingual Linguistic Resources
Evaluating Cross-Language Annotation Transfer in the MultiSemCor Corpus
Luisa Bentivogli | Pamela Forner | Emanuele Pianta
COLING 2004: Proceedings of the 20th International Conference on Computational Linguistics