Jordi Turmo

Also published as: J. Turmo


2018

Coreference Resolution in FreeLing 4.0
Montserrat Marimon | Lluís Padró | Jordi Turmo
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2014

TweetNorm_es: an annotated corpus for Spanish microtext normalization
Iñaki Alegria | Nora Aranberri | Pere Comas | Víctor Fresno | Pablo Gamallo | Lluis Padró | Iñaki San Vicente | Jordi Turmo | Arkaitz Zubiaga
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this paper we introduce TweetNorm_es, an annotated corpus of Spanish-language tweets, which we make publicly available under the terms of the CC-BY license. The corpus is intended for the development and testing of microtext normalization systems. It was created for Tweet-Norm, a tweet normalization workshop and shared task, and is the result of a joint annotation effort by several research groups. We describe the methodology defined to build the corpus as well as the guidelines followed in the annotation process. We also present a brief overview of the Tweet-Norm shared task, the first evaluation environment in which the corpus was used.

2013

A Constraint-Based Hypergraph Partitioning Approach to Coreference Resolution
Emili Sapena | Lluís Padró | Jordi Turmo
Computational Linguistics, Volume 39, Issue 4 - December 2013

UPC-CORE: What Can Machine Translation Evaluation Metrics and Wikipedia Do for Estimating Semantic Textual Similarity?
Alberto Barrón-Cedeño | Lluís Màrquez | Maria Fuentes | Horacio Rodríguez | Jordi Turmo
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity

2012

Summarizing a multimodal set of documents in a Smart Room
Maria Fuentes | Horacio Rodríguez | Jordi Turmo
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This article reports on an intrinsic automatic summarization evaluation in the scientific lecture domain. The lectures take place in a Smart Room that has access to different types of documents produced from different media. An evaluation framework is presented to analyze the performance of systems producing summaries that answer a user need. Several ROUGE metrics are used, and a manual content responsiveness evaluation was carried out to analyze the performance of the evaluated approaches. Various multilingual summarization approaches are analyzed, showing that the use of different types of documents outperforms the use of transcripts alone. In fact, omitting the spontaneous speech transcription from the summary altogether improves the performance of automatic summaries. Moreover, the use of semantic information represented in the textual documents coming from different media helps to improve summary quality.
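The ROUGE metrics mentioned in this abstract score a candidate summary by n-gram overlap with reference summaries. As a minimal sketch (not the authors' implementation, and ignoring ROUGE's stemming and multi-reference options), n-gram recall can be computed as:

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """Fraction of the reference's n-grams that also appear in the candidate."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    # Clipped overlap: each reference n-gram is matched at most as many
    # times as it occurs in the candidate.
    overlap = sum(min(count, cand[g]) for g, count in ref.items())
    return overlap / sum(ref.values())
```

For example, `rouge_n_recall("the cat sat", "the cat sat on the mat")` matches 3 of the reference's 6 unigram occurrences, giving 0.5.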

2011

RelaxCor Participation in CoNLL Shared Task on Coreference Resolution
Emili Sapena | Lluís Padró | Jordi Turmo
Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task

2010

RelaxCor: A Global Relaxation Labeling Approach to Coreference Resolution
Emili Sapena | Lluís Padró | Jordi Turmo
Proceedings of the 5th International Workshop on Semantic Evaluation

A Global Relaxation Labeling Approach to Coreference Resolution
Emili Sapena | Lluís Padró | Jordi Turmo
Coling 2010: Posters

Evaluation Protocol and Tools for Question-Answering on Speech Transcripts
Nicolas Moreau | Olivier Hamon | Djamel Mostefa | Sophie Rosset | Olivier Galibert | Lori Lamel | Jordi Turmo | Pere R. Comas | Paolo Rosso | Davide Buscaldi | Khalid Choukri
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Question Answering (QA) technology aims at providing relevant answers to natural language questions. Most QA research has focused on mining collections of written documents to answer written questions. In addition to written sources, a large (and growing) amount of potentially interesting information appears in spoken documents, such as broadcast news, speeches, seminars, meetings, or telephone conversations. The QAST track (Question-Answering on Speech Transcripts) was introduced in CLEF to investigate the problem of question answering in such audio documents. This paper describes in detail the evaluation protocol and tools designed and developed for the CLEF-QAST evaluation campaigns that took place between 2007 and 2009. We first review the data, question sets, and submission procedures that were produced or set up during these three campaigns. Regarding the evaluation procedure, we describe the interface that was developed to ease the assessors’ work. In addition, this paper introduces a methodology for the semi-automatic evaluation of QAST systems based on time-slot comparisons. Finally, the QAST Evaluation Package 2007-2009 resulting from these evaluation campaigns is also introduced.
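The semi-automatic evaluation based on time-slot comparisons can be illustrated with a small sketch: a system answer extracted from a speech transcript carries a start/end time, and it is judged correct when its slot overlaps the gold annotation's slot. This is an assumed simplification for illustration, not the track's actual scoring code:

```python
def slots_overlap(answer_slot, gold_slot):
    """True if two [start, end] time slots (in seconds) intersect.

    Two closed intervals overlap iff each one starts no later than
    the other one ends.
    """
    a_start, a_end = answer_slot
    g_start, g_end = gold_slot
    return a_start <= g_end and g_start <= a_end
```

For example, an answer spanning (12.0, 14.5) would be accepted against a gold slot of (13.0, 16.0), while one spanning (20.0, 21.0) would not.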

2009

An Analysis of Bootstrapping for the Recognition of Temporal Expressions
Jordi Poveda | Mihai Surdeanu | Jordi Turmo
Proceedings of the NAACL HLT 2009 Workshop on Semi-supervised Learning for Natural Language Processing

2008

Question Answering on Speech Transcriptions: the QAST evaluation in CLEF
Lori Lamel | Sophie Rosset | Christelle Ayache | Djamel Mostefa | Jordi Turmo | Pere Comas
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper reports on the QAST track of CLEF, which aims to evaluate Question Answering on Speech Transcriptions. Accessing information in spoken documents poses challenges beyond those of text-based QA, since systems must handle the characteristics of spoken language as well as errors in automatic transcriptions of spontaneous speech. The framework and results of the pilot QAST evaluation held as part of CLEF 2007 are described, illustrating some of the additional challenges posed by QA in spoken documents relative to written ones. Current plans for future multiple-language and multiple-task QAST evaluations are also described.

2006

A Hybrid Approach for the Acquisition of Information Extraction Patterns
Mihai Surdeanu | Jordi Turmo | Alicia Ageno
Proceedings of the Workshop on Adaptive Text Extraction and Mining (ATEM 2006)

2005

Semantic Role Labeling Using Complete Syntactic Analysis
Mihai Surdeanu | Jordi Turmo
Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005)

A Robust Combination Strategy for Semantic Role Labeling
Lluís Màrquez | Mihai Surdeanu | Pere Comas | Jordi Turmo
Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing

2004

Automatic Classification of Geographic Named Entities
Daniel Ferrés | Marc Massot | Muntsa Padró | Horacio Rodríguez | Jordi Turmo
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)

Automatic Building Gazetteers of Co-referring Named Entities
Daniel Ferrés | Marc Massot | Muntsa Padró | Horacio Rodríguez | Jordi Turmo
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)

Automatic Acquisition of Sense Examples Using ExRetriever
Juan Fernández | Mauro Castillo | German Rigau | Jordi Atserias | Jordi Turmo
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC'04)

2000

Learning IE Rules for a Set of Related Concepts
J. Turmo | H. Rodriguez
Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop