Temporal Information Annotation: Crowd vs. Experts

Tommaso Caselli, Rachele Sprugnoli, Oana Inel


Abstract
This paper describes two sets of crowdsourcing experiments on temporal information annotation conducted in two languages, English and Italian. The first experiment, launched on the CrowdFlower platform, aimed at classifying temporal relations between given target entities. The second, relying on the CrowdTruth metric, consisted of two subtasks: one devoted to the recognition of events and temporal expressions, and the other to the detection and classification of temporal relations. The outcomes of the experiments suggest that crowdsourced annotations can be valuable even for a complex task such as Temporal Processing.
Anthology ID:
L16-1557
Volume:
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
Month:
May
Year:
2016
Address:
Portorož, Slovenia
Editors:
Nicoletta Calzolari, Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
Pages:
3502–3509
URL:
https://aclanthology.org/L16-1557
Cite (ACL):
Tommaso Caselli, Rachele Sprugnoli, and Oana Inel. 2016. Temporal Information Annotation: Crowd vs. Experts. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 3502–3509, Portorož, Slovenia. European Language Resources Association (ELRA).
Cite (Informal):
Temporal Information Annotation: Crowd vs. Experts (Caselli et al., LREC 2016)
PDF:
https://aclanthology.org/L16-1557.pdf