Eneldo Loza Mencía



A Data Set for the Analysis of Text Quality Dimensions in Summarization Evaluation
Margot Mieskes | Eneldo Loza Mencía | Tim Kronsbein
Proceedings of the Twelfth Language Resources and Evaluation Conference

Automatic evaluation of summarization focuses on developing a metric to represent the quality of the resulting text. However, text quality is represented in a variety of dimensions ranging from grammaticality to readability and coherence. In our work, we analyze the dependencies between a variety of quality dimensions on automatically created multi-document summaries and which dimensions automatic evaluation metrics such as ROUGE, PEAK or JSD are able to capture. Our results indicate that variants of ROUGE are correlated to various quality dimensions and that some automatic summarization methods achieve higher-quality summaries than others with respect to individual summary quality dimensions. Our results also indicate that differentiating between quality dimensions facilitates inspection and fine-grained comparison of summarization methods and their characteristics. We make the data from our two summarization quality evaluation experiments publicly available in order to facilitate the future development of specialized automatic evaluation methods.
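The core analysis the abstract describes, checking which quality dimensions a metric captures, amounts to correlating per-summary metric scores with per-summary human ratings. A minimal sketch, using Pearson correlation and made-up scores (the real study uses ROUGE, PEAK and JSD on actual annotated summaries):

```python
# Sketch: correlating an automatic metric (e.g. a ROUGE variant) with human
# ratings of one quality dimension. All scores below are invented for
# illustration; they are not from the paper's data set.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-summary scores: one automatic metric, one human dimension.
rouge_scores = [0.42, 0.31, 0.55, 0.27, 0.49]
coherence_ratings = [3.8, 2.9, 4.2, 2.5, 4.0]

r = pearson(rouge_scores, coherence_ratings)
```

Repeating this per dimension (grammaticality, readability, coherence, ...) yields the kind of per-dimension correlation profile the paper reports.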


Which Scores to Predict in Sentence Regression for Text Summarization?
Markus Zopf | Eneldo Loza Mencía | Johannes Fürnkranz
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

The task of automatic text summarization is to generate a short text that summarizes the most important information in a given set of documents. Sentence regression is an emerging branch in automatic text summarization. Its key idea is to estimate the importance of information via learned utility scores for individual sentences. These scores are then used for selecting sentences from the source documents, typically according to a greedy selection strategy. Recently proposed state-of-the-art models learn to predict ROUGE recall scores of individual sentences, which seems reasonable since the final summaries are evaluated according to ROUGE recall. In this paper, we show in extensive experiments that following this intuition leads to suboptimal results and that learning to predict ROUGE precision scores leads to better results. The crucial difference is to aim not at covering as much information as possible but at wasting as little space as possible in every greedy step.
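The recall-versus-precision distinction can be illustrated with a toy greedy selector: scoring sentences by raw utility (recall-like) favors long, information-dense sentences, while scoring by utility per word (precision-like, i.e. wasting little space) favors compact ones. A minimal sketch with invented sentences and utility scores, not the paper's learned model:

```python
# Sketch of greedy sentence selection under a word budget, contrasting raw
# utility scores ("recall-like") with utility per consumed word
# ("precision-like"). Sentences and scores are made up for illustration.

def greedy_select(sentences, scores, budget, per_word=False):
    """Greedily pick sentences until no remaining one fits the word budget."""
    chosen, used = [], 0
    remaining = list(range(len(sentences)))
    while remaining:
        def key(i):
            length = len(sentences[i].split())
            return scores[i] / length if per_word else scores[i]
        best = max(remaining, key=key)
        length = len(sentences[best].split())
        if used + length <= budget:
            chosen.append(best)
            used += length
        remaining.remove(best)
    return [sentences[i] for i in chosen]

sents = [
    "a long sentence that carries some useful information but many words",
    "short key fact",
    "another short fact",
]
utilities = [1.0, 0.8, 0.7]

recall_style = greedy_select(sents, utilities, budget=12, per_word=False)
precision_style = greedy_select(sents, utilities, budget=12, per_word=True)
```

With a 12-word budget, the recall-style selector spends almost everything on the single long sentence, while the per-word selector fits both short facts, mirroring the "waste as little space as possible" argument.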


Sequential Clustering and Contextual Importance Measures for Incremental Update Summarization
Markus Zopf | Eneldo Loza Mencía | Johannes Fürnkranz
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Unexpected events such as accidents, natural disasters and terrorist attacks represent an information situation where it is crucial to give users access to important and non-redundant information as early as possible. Incremental update summarization (IUS) aims at summarizing events which develop over time. In this paper, we propose a combination of sequential clustering and contextual importance measures to identify important sentences in a stream of documents in a timely manner. Sequential clustering is used to cluster similar sentences. The created clusters are scored by a contextual importance measure which identifies important information as well as redundant information. Experiments on the TREC Temporal Summarization 2015 shared task dataset show that our system achieves superior results compared to the best participating systems.
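The sequential clustering step can be sketched as a one-pass algorithm: each incoming sentence joins the most similar existing cluster if the similarity clears a threshold, otherwise it opens a new cluster. This is a minimal illustration with Jaccard word overlap standing in for whatever similarity measure the actual system uses; the example stream is invented:

```python
# Sketch of sequential clustering over a stream of sentences. Each new
# sentence joins its most similar cluster if similarity >= threshold,
# otherwise it starts a new cluster.

def jaccard(a, b):
    """Word-overlap similarity between two sentences."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def sequential_cluster(stream, threshold=0.3):
    clusters = []  # each cluster is a list of sentences
    for sentence in stream:
        best, best_sim = None, 0.0
        for cluster in clusters:
            sim = max(jaccard(sentence, s) for s in cluster)
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is not None and best_sim >= threshold:
            best.append(sentence)
        else:
            clusters.append([sentence])
    return clusters

stream = [
    "earthquake hits the coastal city",
    "the earthquake damaged the coastal city port",
    "rescue teams arrive at the scene",
]
clusters = sequential_cluster(stream)
```

In the full system, a contextual importance measure would then score each cluster to decide which sentences to emit and which to suppress as redundant.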

Medical Concept Embeddings via Labeled Background Corpora
Eneldo Loza Mencía | Gerard de Melo | Jinseok Nam
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

In recent years, we have seen an increasing amount of interest in low-dimensional vector representations of words. Among other things, these facilitate computing word similarity and relatedness scores. The best-known examples of algorithms that produce representations of this sort are the word2vec approaches. In this paper, we investigate a new model to induce such vector spaces for medical concepts, based on a joint objective that exploits not only word co-occurrences but also manually labeled documents, as available from sources such as PubMed. Our extensive experimental analysis shows that our embeddings lead to significantly higher correlations with human similarity and relatedness assessments than previous work. Due to the simplicity and versatility of vector representations, these findings suggest that our resource can easily be used as a drop-in replacement to improve any systems relying on medical concept similarity measures.
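The "drop-in replacement" claim rests on the standard way such embeddings are consumed: concept similarity reduces to cosine similarity between vectors. A minimal sketch with tiny made-up 3-dimensional vectors (real medical-concept embeddings would have hundreds of dimensions and come from the trained model):

```python
# Sketch: concept similarity via cosine similarity of embedding vectors.
# The vectors below are invented for illustration only.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical embeddings for three medical concepts.
embeddings = {
    "myocardial_infarction": [0.9, 0.2, 0.1],
    "heart_attack": [0.85, 0.25, 0.15],
    "influenza": [0.1, 0.9, 0.3],
}

sim = cosine(embeddings["myocardial_infarction"], embeddings["heart_attack"])
```

Any system that currently calls a medical similarity measure could swap in such a lookup, which is what makes the resource easy to reuse.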

Beyond Centrality and Structural Features: Learning Information Importance for Text Summarization
Markus Zopf | Eneldo Loza Mencía | Johannes Fürnkranz
Proceedings of the 20th SIGNLL Conference on Computational Natural Language Learning


Simultaneous Feature Selection and Parameter Optimization Using Multi-objective Optimization for Sentiment Analysis
Mohammed Arif Khan | Asif Ekbal | Eneldo Loza Mencía
Proceedings of the 12th International Conference on Natural Language Processing