Benchmarking Textual Annotation Tools for the Semantic Web

Diana Maynard
Abstract
This paper investigates the state of the art in automatic textual annotation tools and examines the extent to which they are ready for real-world use. We define benchmarking criteria for measuring the usability of annotation tools, focusing on the factors that matter most when a real user must decide which tool best suits their needs. We discuss criteria such as usability, accessibility, interoperability and scalability, and evaluate a set of annotation tools against them. Finally, we draw conclusions about the current state of annotation research and make suggestions for future work.
Anthology ID:
L08-1342
Volume:
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Month:
May
Year:
2008
Address:
Marrakech, Morocco
Editors:
Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Daniel Tapias
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
URL:
http://www.lrec-conf.org/proceedings/lrec2008/pdf/11_paper.pdf
Cite (ACL):
Diana Maynard. 2008. Benchmarking Textual Annotation Tools for the Semantic Web. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), Marrakech, Morocco. European Language Resources Association (ELRA).
Cite (Informal):
Benchmarking Textual Annotation Tools for the Semantic Web (Maynard, LREC 2008)