Núria Bel

Also published as: Nuria Bel


2024

pdf bib
TEMA: Token Embeddings Mapping for Enriching Low-Resource Language Models
Rodolfo Zevallos | Núria Bel | Mireia Farrús
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

The objective of the research presented here is to remedy the low quality of language models for low-resource languages. We introduce the Token Embedding Mapping Algorithm (TEMA), which maps the token embeddings of a richly pre-trained model L1 onto those of a poorly trained model L2, thus creating a richer model L2’. Our experiments show that the L2’ model reduces perplexity with respect to the original monolingual model L2 and that, for downstream tasks including SuperGLUE, its results are state-of-the-art or better for the most semantic tasks. The models obtained with TEMA are also competitive with or better than the multilingual or extended models proposed as solutions for mitigating low-resource language problems.
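Since the paper's implementation is not reproduced here, the following is a minimal sketch of the general idea of mapping one model's token embeddings into another's space, using an orthogonal Procrustes alignment fitted on tokens shared by both vocabularies. The alignment method, function names and the row-overwriting step are illustrative assumptions, not the published TEMA algorithm.

```python
import numpy as np

def fit_mapping(anchors_l1: np.ndarray, anchors_l2: np.ndarray) -> np.ndarray:
    """Orthogonal map W minimising ||anchors_l1 @ W - anchors_l2||_F,
    computed with the SVD-based Procrustes solution."""
    u, _, vt = np.linalg.svd(anchors_l1.T @ anchors_l2)
    return u @ vt

def enrich_embeddings(emb_l1, emb_l2, shared):
    """Project L1 token embeddings into L2 space and overwrite the
    corresponding (poorly trained) rows of the L2 embedding table.
    `shared` holds (l1_row, l2_row) index pairs of shared tokens."""
    i1 = np.array([i for i, _ in shared])
    i2 = np.array([j for _, j in shared])
    w = fit_mapping(emb_l1[i1], emb_l2[i2])
    enriched = emb_l2.copy()
    enriched[i2] = emb_l1[i1] @ w  # richer vectors, now in L2 coordinates
    return enriched
```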

pdf bib
EsCoLA: Spanish Corpus of Linguistic Acceptability
Nuria Bel | Marta Punsola | Valle Ruíz-Fernández
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Acceptability is one of the General Language Understanding Evaluation Benchmark (GLUE) probing tasks proposed to assess the linguistic capabilities acquired by a deep-learning transformer-based language model (LM). In this paper, we introduce EsCoLA, the Spanish Corpus of Linguistic Acceptability. EsCoLA has been developed following the example of other linguistic acceptability data sets for English, Italian, Norwegian and Russian, with the aim of having a complete GLUE benchmark for Spanish. EsCoLA consists of 11,174 sentences and their acceptability judgements as found in well-known Spanish reference grammars. Additionally, all sentences have been annotated with the class of linguistic phenomenon the sentence is an example of, also following previous practices. We also provide as task baselines the results of fine-tuning four different language models on this data set, as well as the results of a human annotation experiment. Results are analyzed and discussed to guide future research. EsCoLA is released under a CC-BY 4.0 license and freely available at https://doi.org/10.34810/data1138.
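As a rough illustration of the fine-tuning baseline described above, here is a minimal Hugging Face sketch. The checkpoint name, file name and column names are assumptions, not details taken from the paper.

```python
import pandas as pd
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

class EscolaDataset(torch.utils.data.Dataset):
    """Wraps a dataframe with `sentence` and `label` columns (names assumed)."""
    def __init__(self, df, tokenizer):
        self.enc = tokenizer(list(df["sentence"]), truncation=True, padding=True)
        self.labels = list(df["label"])
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

checkpoint = "PlanTL-GOB-ES/roberta-base-bne"  # one plausible Spanish model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)
train_set = EscolaDataset(pd.read_csv("escola_train.tsv", sep="\t"), tokenizer)

Trainer(model=model,
        args=TrainingArguments(output_dir="escola_out", num_train_epochs=3),
        train_dataset=train_set).train()
```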

2023

pdf bib
Hints on the data for language modeling of synthetic languages with transformers
Rodolfo Zevallos | Nuria Bel
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Language Models (LMs) are becoming more and more useful for providing representations upon which to train Natural Language Processing applications. However, there is now clear evidence that attention-based transformers require a critical amount of language data to produce good enough LMs. The question we address in this paper is to what extent this critical amount of data varies for languages of different morphological typology, in particular those that have a rich inflectional morphology, and whether the tokenization method used to preprocess the data can make a difference. These details can be important for low-resource languages that need to plan the production of datasets. We evaluated intrinsically and extrinsically the differences across five languages with different pretraining dataset sizes and three different tokenization methods for each. The results confirm that the size of the vocabulary due to morphological characteristics is directly correlated with both the LM perplexity and the performance of two typical downstream tasks, named entity recognition (NER) and part-of-speech (POS) labeling. The experiments also provide new evidence that a canonical tokenizer can reduce perplexity by more than half for a polysynthetic language like Quechua, as well as raise F1 from 0.8 to more than 0.9 in both downstream tasks with an LM trained on only 6M tokens.
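The vocabulary statistics the paper correlates with perplexity are straightforward to compute. The sketch below, with hypothetical tokenizer callables, shows the kind of comparison loop one might run over candidate tokenization methods.

```python
from collections import Counter

def vocab_stats(tokens):
    """Vocabulary size and type/token ratio for one tokenization of a corpus."""
    counts = Counter(tokens)
    return len(counts), len(counts) / max(len(tokens), 1)

# `bpe_tokenize` and `canonical_segment` are placeholders standing in for
# the tokenization methods compared in the paper:
# for name, tokenize in [("whitespace", str.split),
#                        ("bpe", bpe_tokenize),
#                        ("canonical", canonical_segment)]:
#     print(name, vocab_stats(tokenize(raw_corpus)))
```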

pdf bib
Frequency Balanced Datasets Lead to Better Language Models
Rodolfo Zevallos | Mireia Farrús | Núria Bel
Findings of the Association for Computational Linguistics: EMNLP 2023

This paper reports on experiments aimed at improving our understanding of the role of the amount of data required for training attention-based transformer language models. Specifically, we investigate the impact of reducing the immense amounts of required pre-training data through sampling strategies that identify and reduce high-frequency tokens, as different studies have indicated that very high-frequency tokens in pre-training data might bias learning, causing undesired effects. In this light, we describe our sampling algorithm, which iteratively assesses token frequencies and removes sentences that still contain high-frequency tokens, eventually delivering a balanced, linguistically correct dataset. We evaluate the results in terms of model perplexity and of fine-tuning on linguistic probing tasks, NLP downstream tasks, and the more semantic SuperGLUE tasks. The results show that pre-training with the resulting balanced dataset allows the amount of pre-training data to be reduced by up to a factor of three.
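A minimal sketch of the sampling loop described in the abstract: recount token frequencies, drop sentences that still contain over-frequent tokens, repeat. The relative-frequency threshold and the stopping rule are assumptions, not the paper's settings.

```python
from collections import Counter

def balance(sentences, max_rel_freq=1e-3, max_iters=10):
    """Iteratively drop sentences containing tokens whose relative frequency
    still exceeds `max_rel_freq`. `sentences` is a list of token lists."""
    kept = list(sentences)
    for _ in range(max_iters):
        counts = Counter(tok for sent in kept for tok in sent)
        total = sum(counts.values())
        if not total:
            break
        hot = {tok for tok, c in counts.items() if c / total > max_rel_freq}
        if not hot:
            break                      # frequencies are balanced enough
        survivors = [s for s in kept if not hot.intersection(s)]
        if not survivors:
            break                      # never empty the dataset entirely
        kept = survivors
    return kept
```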

2022

pdf bib
Introducing QuBERT: A Large Monolingual Corpus and BERT Model for Southern Quechua
Rodolfo Zevallos | John Ortega | William Chen | Richard Castro | Núria Bel | Cesar Yoshikawa | Renzo Venturas | Hilario Aradiel | Nelsi Melgarejo
Proceedings of the Third Workshop on Deep Learning for Low-Resource Natural Language Processing

The lack of resources for languages in the Americas has proven to be a problem for the creation of digital systems such as machine translation, search engines, chatbots, and more. The scarcity of digital resources for a language has an even higher impact on populations where the language is spoken by millions of people. We introduce the first official large combined corpus for deep learning of Quechua, an indigenous South American low-resource language spoken by millions. Specifically, our curated corpus is created from text gathered from the southern region of Peru, where a dialect of Quechua is spoken that has not traditionally been targeted by digital systems. In order to make our work repeatable by others, we also offer a public, pre-trained BERT model called QuBERT, which is the largest language model ever trained for any Quechua variety, not just the southern dialect. We furthermore test our corpus and its corresponding BERT model on two major tasks: (1) named-entity recognition (NER) and (2) part-of-speech (POS) tagging, using state-of-the-art techniques, and achieve results comparable to other work on higher-resource languages. In this article, we describe the methodology, challenges, and results of the creation of QuBERT, which is on par with other state-of-the-art multilingual models for natural language processing, achieving between 71 and 74% F1 score on NER and 84–87% on POS tasks.

2020

pdf bib
Proceedings of the 28th International Conference on Computational Linguistics
Donia Scott | Nuria Bel | Chengqing Zong
Proceedings of the 28th International Conference on Computational Linguistics

pdf bib
The European Language Technology Landscape in 2020: Language-Centric and Human-Centric AI for Cross-Cultural Communication in Multilingual Europe
Georg Rehm | Katrin Marheinecke | Stefanie Hegele | Stelios Piperidis | Kalina Bontcheva | Jan Hajič | Khalid Choukri | Andrejs Vasiļjevs | Gerhard Backfried | Christoph Prinz | José Manuel Gómez-Pérez | Luc Meertens | Paul Lukowicz | Josef van Genabith | Andrea Lösch | Philipp Slusallek | Morten Irgens | Patrick Gatellier | Joachim Köhler | Laure Le Bars | Dimitra Anastasiou | Albina Auksoriūtė | Núria Bel | António Branco | Gerhard Budin | Walter Daelemans | Koenraad De Smedt | Radovan Garabík | Maria Gavriilidou | Dagmar Gromann | Svetla Koeva | Simon Krek | Cvetana Krstev | Krister Lindén | Bernardo Magnini | Jan Odijk | Maciej Ogrodniczuk | Eiríkur Rögnvaldsson | Mike Rosner | Bolette Pedersen | Inguna Skadiņa | Marko Tadić | Dan Tufiș | Tamás Váradi | Kadri Vider | Andy Way | François Yvon
Proceedings of the Twelfth Language Resources and Evaluation Conference

Multilingualism is a cultural cornerstone of Europe and firmly anchored in the European treaties, including full language equality. However, language barriers impacting business as well as cross-lingual and cross-cultural communication are still omnipresent. Language Technologies (LTs) are a powerful means to break down these barriers. While the last decade has seen various initiatives that created a multitude of approaches and technologies tailored to Europe’s specific needs, there is still an immense level of fragmentation. At the same time, AI has become an increasingly important concept in the European Information and Communication Technology area. For a few years now, AI – including many opportunities and synergies, but also misconceptions – has been overshadowing every other topic. We present an overview of the European LT landscape, describing funding programmes, activities, actions and challenges in the different countries with regard to LT, including the current state of play in industry and the LT market. We present a brief overview of the main LT-related activities on the EU level in the last ten years and develop strategic guidance with regard to four key dimensions.

2018

pdf bib
Can Domain Adaptation be Handled as Analogies?
Núria Bel | Joel Pocostales
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2017

pdf bib
Annotation of negation in the IULA Spanish Clinical Record Corpus
Montserrat Marimon | Jorge Vivaldi | Núria Bel
Proceedings of the Workshop Computational Semantics Beyond Events and Roles

This paper presents the IULA Spanish Clinical Record Corpus, a corpus of 3,194 sentences extracted from anonymized clinical records and manually annotated with negation markers and their scope. The corpus was conceived as a resource to support clinical text-mining systems, but it is also a useful resource for other Natural Language Processing systems handling clinical texts: automatic encoding of clinical records, diagnosis support, term extraction, among others, as well as for the study of clinical texts. The corpus is publicly available with a CC-BY-SA 3.0 license.

2016

pdf bib
SemEval-2016 Task 5: Aspect Based Sentiment Analysis
Maria Pontiki | Dimitris Galanis | Haris Papageorgiou | Ion Androutsopoulos | Suresh Manandhar | Mohammad AL-Smadi | Mahmoud Al-Ayyoub | Yanyan Zhao | Bing Qin | Orphée De Clercq | Véronique Hoste | Marianna Apidianaki | Xavier Tannier | Natalia Loukachevitch | Evgeniy Kotelnikov | Nuria Bel | Salud María Jiménez-Zafra | Gülşen Eryiğit
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

pdf bib
CobaltF: A Fluent Metric for MT Evaluation
Marina Fomicheva | Núria Bel | Lucia Specia | Iria da Cunha | Anton Malinovskiy
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers

pdf bib
Leveraging RDF Graphs for Crossing Multiple Bilingual Dictionaries
Marta Villegas | Maite Melero | Núria Bel | Jorge Gracia
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

The experiments presented here exploit the properties of the Apertium RDF Graph, principally cycle density and node degree, to automatically generate new translation relations between words and thereby enrich existing bilingual dictionaries with new entries. Currently, the Apertium RDF Graph includes data from 22 Apertium bilingual dictionaries and constitutes a large unified array of linked lexical entries and translations that are available and accessible on the Web (http://linguistic.linkeddata.es/apertium/). In particular, its graph structure allows for interesting exploitation opportunities, some of which are addressed in this paper. Two ‘massive’ experiments are reported: in the first, the original EN-ES translation set was removed from the Apertium RDF Graph and a new EN-ES version was generated; the results were compared against the previously removed EN-ES data and against the Concise Oxford Spanish Dictionary. In the second experiment, a new, previously non-existent EN-FR translation set was generated; in this case the results were compared against a converted Wiktionary English-French file. The results we obtained are very good and hold up even for the extreme case of correlated polysemy. This led us to consider the possibility of using cycles and node degree to identify potential oddities in the source data: if cycle density proves efficient when considering potential targets, we can assume that, in dense graphs, nodes with low degree may indicate potential errors.
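As a toy stand-in for the cycle-based inference described above, the sketch below scores candidate translations by the number of one-pivot paths connecting two words in a translation graph. The 'lang:lemma' node naming, the dict-of-sets graph representation and the support threshold are assumptions.

```python
from collections import Counter

def propose_translations(graph, source, target_lang, min_support=2):
    """Candidate translations of `source` into `target_lang`, scored by the
    number of distinct pivots p with edges source-p and p-candidate (a crude
    proxy for cycle density). `graph` maps a node such as 'en:bank' to the
    set of its translation neighbours."""
    support = Counter()
    for pivot in graph.get(source, ()):
        for cand in graph.get(pivot, ()):
            if cand != source and cand.startswith(target_lang + ":"):
                support[cand] += 1
    return [(c, n) for c, n in support.most_common() if n >= min_support]
```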

pdf bib
Towards producing bilingual lexica from monolingual corpora
Jingyi Han | Núria Bel
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Bilingual lexica are the basis for many cross-lingual natural language processing tasks. Recent work has shown success in learning bilingual dictionaries by taking advantage of comparable corpora and a diverse set of signals derived from monolingual corpora. In the present work, we describe an approach to automatically learn bilingual lexica by training a supervised classifier on word-embedding-based vectors of only a few hundred translation-equivalent word pairs. The word embedding representations of the translation pairs were obtained from source and target monolingual corpora, which are not necessarily related. Our classifier is able to predict whether a new word pair stands in a translation relation or not. We tested it on two quite distinct language pairs, Chinese-Spanish and English-Spanish. The classifiers achieved more than 0.90 precision and recall for both language pairs in different evaluation scenarios. These results show the high potential of this method for bilingual lexica production for language pairs with a reduced amount of parallel or comparable corpora, in particular for phrase table expansion in Statistical Machine Translation systems.
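A compact sketch of the classification setup described above: each candidate pair is represented by the concatenation of its two monolingual embeddings and fed to a binary classifier. A logistic regression stands in for whatever classifier the paper actually used, and the seed/negative pair lists are assumed inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(src_vecs, tgt_vecs, pairs):
    """Concatenate the (independently trained) source and target word
    embeddings of each candidate pair; `src_vecs`/`tgt_vecs` map words to
    numpy vectors."""
    return np.stack([np.concatenate([src_vecs[s], tgt_vecs[t]])
                     for s, t in pairs])

def train_pair_classifier(src_vecs, tgt_vecs, positives, negatives):
    """Binary classifier: is this word pair a translation pair?"""
    X = np.vstack([pair_features(src_vecs, tgt_vecs, positives),
                   pair_features(src_vecs, tgt_vecs, negatives)])
    y = np.array([1] * len(positives) + [0] * len(negatives))
    return LogisticRegression(max_iter=1000).fit(X, y)
```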

pdf bib
Using Contextual Information for Machine Translation Evaluation
Marina Fomicheva | Núria Bel
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Automatic evaluation of Machine Translation (MT) is typically approached by measuring the similarity between the candidate MT and a human reference translation. An important limitation of existing evaluation systems is that they are unable to distinguish candidate-reference differences that arise from acceptable linguistic variation from differences induced by MT errors. In this paper we present a new metric, UPF-Cobalt, that addresses this issue by taking into consideration the syntactic contexts of candidate and reference words. The metric applies a penalty when the words are similar but the contexts in which they occur are not equivalent. In this way, machine translations that differ from the human translation but are still essentially correct are distinguished from those that share a high number of words with the reference but alter the meaning of the sentence due to translation errors. The results show that the proposed method is indeed beneficial for automatic MT evaluation. We report experiments based on two different evaluation tasks with various types of manual quality assessment. The metric significantly outperforms state-of-the-art evaluation systems in varying evaluation settings.
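The penalty idea can be illustrated in a few lines: score an aligned word pair, then discount it when the syntactic contexts (reduced here to each word's head) are not themselves similar. The `alpha` weight and the head-only notion of context are simplifications, not the metric's actual formulation.

```python
def context_penalised_sim(cand_word, ref_word, cand_head, ref_head,
                          sim, alpha=0.5):
    """Toy context-penalised similarity: `sim` is any word-similarity
    callable returning a value in [0, 1]. The pair's base score is
    discounted in proportion to how dissimilar the two heads are."""
    base = sim(cand_word, ref_word)
    penalty = 1.0 - sim(cand_head, ref_head)
    return base * (1.0 - alpha * penalty)
```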

pdf bib
Assessing the Potential of Metaphoricity of verbs using corpus data
Marco Del Tredici | Núria Bel
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

The paper investigates the relation between metaphoricity and the distributional characteristics of verbs, introducing POM, a corpus-derived index that can be used to define the upper bound of metaphoricity of any expression in which a given verb occurs. The work starts from the observation that while some verbs can be used to create highly metaphoric expressions, others cannot. We conjecture that this fact is related to the number of contexts in which a verb occurs and to the frequency of each context. This intuition is modelled by introducing a method in which each context of a verb in a corpus is assigned a vector representation, and a clustering algorithm is employed to identify similar contexts. Finally, the standard deviation of the relative frequency values of the clusters is computed and taken as the POM of the target verb. We tested POM in two experimental settings, obtaining accuracy values of 84% and 92%. Since we are convinced, along with (Shutoff, 2015), that metaphor detection systems should be concerned only with the identification of highly metaphoric expressions, we believe that POM could be profitably employed by these systems to a priori exclude expressions that, due to the verb they include, can only have low degrees of metaphoricity.
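The index as described maps directly onto code: embed the contexts of a verb, cluster them, and take the standard deviation of the clusters' relative frequencies. The number of clusters and the choice of k-means are assumptions; the paper's clustering setup may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def pom(context_vectors: np.ndarray, k: int = 10) -> float:
    """POM sketch: cluster one verb's context vectors (one row per corpus
    occurrence) and return the standard deviation of the clusters'
    relative frequencies."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(context_vectors)
    freqs = np.bincount(labels, minlength=k) / len(labels)
    return float(np.std(freqs))
```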

2015

pdf bib
Reading Between the Lines: Overcoming Data Sparsity for Accurate Classification of Lexical Relationships
Silvia Necşulescu | Sara Mendes | David Jurgens | Núria Bel | Roberto Navigli
Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics

pdf bib
A Word-Embedding-based Sense Index for Regular Polysemy Representation
Marco Del Tredici | Núria Bel
Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing

pdf bib
UPF-Cobalt Submission to WMT15 Metrics Task
Marina Fomicheva | Núria Bel | Iria da Cunha | Anton Malinovskiy
Proceedings of the Tenth Workshop on Statistical Machine Translation

2014

pdf bib
Boosting the creation of a treebank
Blanca Arias | Núria Bel | Mercè Lorente | Montserrat Marimón | Alba Milà | Jorge Vivaldi | Muntsa Padró | Marina Fomicheva | Imanol Larrea
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this paper we present the results of an ongoing experiment in bootstrapping a treebank for Catalan by using a dependency parser trained on Spanish sentences. In order to save time and cost, our approach was to profit from the typological similarities between Catalan and Spanish to create a first Catalan data set quickly by (i) automatically annotating with a de-lexicalized Spanish parser, (ii) manually correcting the parses, and (iii) using the corrected Catalan sentences to train a Catalan parser. The results showed that about 1,000 parsed sentences are required to train a Catalan parser; these were produced in 4 months with 2 annotators.

pdf bib
The IULA Spanish LSP Treebank
Montserrat Marimon | Núria Bel | Beatriz Fisas | Blanca Arias | Silvia Vázquez | Jorge Vivaldi | Carlos Morell | Mercè Lorente
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper presents the IULA Spanish LSP Treebank, a dependency treebank of over 41,000 sentences from different domains (Law, Economy, Computing Science, Environment, and Medicine), developed in the framework of the European project METANET4U. Dependency annotations in the treebank were automatically derived from manually selected parses produced by an HPSG grammar, using a deterministic conversion algorithm: the identifiers of grammar rules identify the heads, the dependents, and some dependency types that are directly transferred onto the dependency structure (e.g., subject, specifier, and modifier), while the identifiers of the lexical entries identify the argument-related dependency functions (e.g., direct object, indirect object, and oblique complement). The treebank is accessible with a browser that provides concordance-based search functions and delivers the results in two formats: (i) a column-based format, in the style of the CoNLL-2006 shared task, and (ii) a dependency graph, where dependency relations are denoted by an oriented arrow from the dependent node to the head node. The IULA Spanish LSP Treebank is the first technical corpus of Spanish annotated at the surface syntactic level following dependency grammar theory. The treebank has been made publicly and freely available from the META-SHARE platform with a Creative Commons CC-BY licence.

pdf bib
The Strategic Impact of META-NET on the Regional, National and International Level
Georg Rehm | Hans Uszkoreit | Sophia Ananiadou | Núria Bel | Audronė Bielevičienė | Lars Borin | António Branco | Gerhard Budin | Nicoletta Calzolari | Walter Daelemans | Radovan Garabík | Marko Grobelnik | Carmen García-Mateo | Josef van Genabith | Jan Hajič | Inma Hernáez | John Judge | Svetla Koeva | Simon Krek | Cvetana Krstev | Krister Lindén | Bernardo Magnini | Joseph Mariani | John McNaught | Maite Melero | Monica Monachini | Asunción Moreno | Jan Odijk | Maciej Ogrodniczuk | Piotr Pęzik | Stelios Piperidis | Adam Przepiórkowski | Eiríkur Rögnvaldsson | Michael Rosner | Bolette Pedersen | Inguna Skadiņa | Koenraad De Smedt | Marko Tadić | Paul Thompson | Dan Tufiş | Tamás Váradi | Andrejs Vasiļjevs | Kadri Vider | Jolanta Zabarskaite
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This article provides an overview of the dissemination work carried out in META-NET from 2010 until early 2014; we describe its impact on the regional, national and international level, mainly with regard to politics and the situation of funding for LT topics. This paper documents the initiative’s work throughout Europe in order to boost progress and innovation in our field.

pdf bib
CLARA: A New Generation of Researchers in Common Language Resources and Their Applications
Koenraad De Smedt | Erhard Hinrichs | Detmar Meurers | Inguna Skadiņa | Bolette Pedersen | Costanza Navarretta | Núria Bel | Krister Lindén | Markéta Lopatková | Jan Hajič | Gisle Andersen | Przemyslaw Lenkiewicz
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

CLARA (Common Language Resources and Their Applications) is a Marie Curie Initial Training Network which ran from 2009 until 2014 with the aim of providing researcher training in crucial areas related to language resources and infrastructure. The scope of the project was broad and included infrastructure design, lexical semantic modeling, domain modeling, multimedia and multimodal communication, applications, and parsing technologies and grammar models. An international consortium of 9 partners and 12 associate partners employed researchers in 19 new positions and organized a training program consisting of 10 thematic courses and summer/winter schools. The project has resulted in new theoretical insights as well as new resources and tools. Most importantly, the project has trained a new generation of researchers who can perform advanced research and development in language resources and technologies.

pdf bib
A cascade approach for complex-type classification
Lauren Romeo | Sara Mendes | Núria Bel
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

The work detailed in this paper describes a 2-step cascade approach for the classification of complex-type nominals. We describe an experiment that demonstrates how a cascade approach performs when the task consists in distinguishing nominals of a given complex type from any other noun in the language. Overall, our classifier successfully identifies very specific and not highly frequent lexical items such as complex types with high accuracy, and distinguishes them from instances that are not complex types by using lexico-syntactic patterns indicative of the semantic classes corresponding to each of the individual sense components of the complex type. Although there is still room for improvement with regard to the coverage of the classifiers developed, the cascade approach increases the precision of classification for the complex-type nouns covered in the experiment presented.

pdf bib
Choosing which to use? A study of distributional models for nominal lexical semantic classification
Lauren Romeo | Gianluca Lebani | Núria Bel | Alessandro Lenci
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper empirically evaluates the performance of different state-of-the-art distributional models in a nominal lexical semantic classification task. We consider models that exploit various types of distributional features, which thereby provide different representations of nominal behavior in context. The experiments presented in this work demonstrate the advantages and disadvantages of each model considered. The analysis also considers a combined strategy that we found capable of alleviating the bottlenecks of each model, especially when large robust data is not available.

pdf bib
Metadata as Linked Open Data: mapping disparate XML metadata registries into one RDF/OWL registry.
Marta Villegas | Maite Melero | Núria Bel
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

The proliferation of different metadata schemas and models poses serious interoperability problems. Maintaining isolated repositories with overlapping data is costly in terms of time and effort. In this paper, we describe how we have produced a Linked Open Data version of metadata descriptions coming from heterogeneous sources, originally encoded in XML. The resulting model is much simpler than the original XSD schema and avoids problems typical of XML syntax, such as semantic ambiguity and order constraints. Moreover, the open-world assumption of RDF/OWL allows objects from different schemas to be integrated naturally and further extensions to be added, facilitating the merging of different models as well as linking to external data. Apart from the advantages in terms of interoperability and maintainability, the merged repository enables end-users to query multiple sources using a unified schema and can present them with implicit knowledge derived from the linked data. The approach we present here is easily scalable to any number of sources and schemas.
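A minimal sketch of the XML-to-RDF direction of such a conversion with rdflib, flattening each leaf element of a record into a triple. The namespace and the treatment of identifiers are placeholders; the real mapping is schema-aware rather than this flat.

```python
import xml.etree.ElementTree as ET
from rdflib import Graph, Literal, Namespace

META = Namespace("http://example.org/meta#")  # placeholder namespace

def xml_record_to_rdf(xml_path):
    """Flatten one XML metadata record into RDF: every element with text
    becomes a property of the record resource. This only shows the
    mechanics of the conversion."""
    g = Graph()
    root = ET.parse(xml_path).getroot()
    subject = META[root.attrib.get("id", "record")]
    for el in root.iter():
        if el.text and el.text.strip():
            g.add((subject, META[el.tag], Literal(el.text.strip())))
    return g
```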

pdf bib
Ranking Job Offers for Candidates: learning hidden knowledge from Big Data
Marc Poch | Núria Bel | Sergio Espeja | Felipe Navío
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This paper presents a system for suggesting a ranked list of appropriate vacancy descriptions to job seekers on a job board web site. In particular, our work has explored the use of supervised classifiers with the objective of learning implicit relations which cannot be found with similarity- or pattern-based search methods that rely only on explicit information. Skills, names of professions and degrees, among other examples, are expressed in different languages and show high variation, and the use of ad-hoc resources to trace the relations is very costly. This implicit information is unveiled when a candidate applies for a job, and it can therefore be used to learn a model that predicts new cases. The results of our experiments, which combine different clustering, classification and ranking methods, show the validity of the approach.

pdf bib
Combining dependency information and generalization in a pattern-based approach to the classification of lexical-semantic relation instances
Silvia Necşulescu | Sara Mendes | Núria Bel
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This work addresses the classification of word pairs as instances of lexical-semantic relations. The classification is approached by leveraging patterns of co-occurrence contexts from corpus data. We analyze the significance of using dependency information, of augmenting the set of dependency paths provided to the system, and of generalizing patterns using part-of-speech information for the classification of lexical-semantic relation instances. Results show that dependency information is decisive for achieving better results in both precision and recall, while generalizing features based on dependency information by replacing lexical forms with their part of speech increases the coverage of classification systems. Our experiments also make apparent that approaches based on the contexts where word pairs co-occur are upper-bounded by the number of times these pairs appear in the same sentence. Strategies to use information across sentence boundaries are therefore necessary.
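The generalization step evaluated above is simple to state in code: keep the dependency labels of a path but replace lexical forms with their part of speech. The (form, pos, deprel) triple layout is an assumption.

```python
def generalize_path(path):
    """Replace lexical forms on a dependency path with their POS tags,
    keeping the dependency labels."""
    return [(pos, deprel) for form, pos, deprel in path]

# Example:
# [("dog", "NOUN", "nsubj"), ("chased", "VERB", "root")]
#   -> [("NOUN", "nsubj"), ("VERB", "root")]
```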

pdf bib
Using unmarked contexts in nominal lexical semantic classification
Lauren Romeo | Sara Mendes | Núria Bel
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

pdf bib
Squibs: Automatic Selection of HPSG-Parsed Sentences for Treebank Construction
Montserrat Marimon | Núria Bel | Lluís Padró
Computational Linguistics, Volume 40, Issue 3 - September 2014

2013

pdf bib
Annotation of regular polysemy and underspecification
Héctor Martínez Alonso | Bolette Sandford Pedersen | Núria Bel
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

pdf bib
Towards the automatic classification of complex-type nominals
Lauren Romeo | Sara Mendes | Núria Bel
Proceedings of the 6th International Conference on Generative Approaches to the Lexicon (GL2013)

pdf bib
Class-based Word Sense Induction for dot-type nominals
Lauren Romeo | Héctor Martínez Alonso | Núria Bel
Proceedings of the 6th International Conference on Generative Approaches to the Lexicon (GL2013)

pdf bib
MosesCore: Moses Open Source Evaluation and Support Co-ordination for OutReach and Exploitation
Nuria Bel | Marc Poch | Antonio Toral
Proceedings of Machine Translation Summit XIV: European projects

pdf bib
PANACEA: Platform for Automatic, Normalised Annotation and Cost-Effective Acquisition of Language Resources for Human Language Technologies
Nuria Bel | Marc Poch | Antonio Toral
Proceedings of Machine Translation Summit XIV: European projects

2012

pdf bib
Webservices for Bayesian Learning
Muntsa Padró | Núria Bel
Proceedings of the Workshop on Computational Models of Language Acquisition and Loss

pdf bib
Using Qualia Information to Identify Lexical Semantic Classes in an Unsupervised Clustering Task
Lauren Romeo | Sara Mendes | Núria Bel
Proceedings of COLING 2012: Posters

pdf bib
Automatic Extraction of Polar Adjectives for the Creation of Polarity Lexicons
Silvia Vázquez | Muntsa Padró | Núria Bel | Julio Gonzalo
Proceedings of COLING 2012: Posters

pdf bib
A Classification of Adjectives for Polarity Lexicons Enhancement
Silvia Vázquez | Núria Bel
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Subjective language detection is one of the most important challenges in Sentiment Analysis. Because of their weight and frequency in opinionated texts, adjectives are considered a key element in the opinion extraction process. These subjective units are more and more frequently collected in polarity lexicons, in which they appear annotated with their prior polarity. However, at the moment, no polarity lexicon takes into account prior polarity variations across domains. This paper shows that a majority of adjectives change their prior polarity value depending on the domain. We propose a distinction between domain-dependent and domain-independent adjectives. Moreover, our analysis led us to propose a further classification related to subjectivity degree: constant, mixed and highly subjective adjectives. With this classification, polarity values will better support Sentiment Analysis.

pdf bib
A voting scheme to detect semantic underspecification
Héctor Martínez Alonso | Núria Bel | Bolette Sandford Pedersen
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The following work describes a voting system to automatically classify the sense selection of the complex types Location/Organization and Container/Content, which depend on regular polysemy, as described by the Generative Lexicon (Pustejovsky, 1995). This kind of sense alternation very often presents semantic underspecification between the two possible selected senses. Such underspecification is not traditionally contemplated in word sense disambiguation systems, which are still coping with the need for a representation and recognition of underspecification (Pustejovsky, 2009). The data are characterized by the morphosyntactic and lexical environment of the headwords and provided as input to a classifier. A baseline decision tree classifier is compared against an eight-member voting scheme obtained from variants of the training data, generated by modifications of the class representation, and from two different classification algorithms, namely decision trees and k-nearest neighbors. The voting system improves the accuracy for the non-underspecified senses, but the underspecified sense remains difficult to identify.
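A sketch of the eight-member scheme with scikit-learn: one decision tree and one k-NN classifier per training-data variant (four variants yield eight voters), combined by majority vote. The construction of the variants themselves is not reproduced here.

```python
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def train_voters(variants):
    """One decision tree and one k-NN per (X, y) training-data variant;
    four variants give the eight voters of the scheme."""
    voters = []
    for X, y in variants:
        voters.append(DecisionTreeClassifier().fit(X, y))
        voters.append(KNeighborsClassifier().fit(X, y))
    return voters

def vote(voters, x):
    """Majority vote over the ensemble for a single feature vector."""
    ballots = [v.predict([x])[0] for v in voters]
    return Counter(ballots).most_common(1)[0][0]
```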

pdf bib
Iula2Standoff: a tool for creating standoff documents for the IULACT
Carlos Morell | Jorge Vivaldi | Núria Bel
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Due to the increase in the number and depth of analyses required over text, like entity recognition, POS tagging, syntactic analysis, etc., in-line annotation has become impractical. In Natural Language Processing (NLP) some emphasis has been placed on finding an annotation method to solve this problem. One possibility is standoff annotation. With this annotation style it is possible to add new levels of annotation without disturbing existing ones, with minimal knock-on effects. Standoff annotation increases the possibility of adding more linguistic information, as well as the possibilities for sharing textual resources. In this paper we present a tool, developed in the framework of the European Metanet4u project (Enhancing the European Linguistic Infrastructure, GA 270893), for creating a multi-layered XML annotation scheme based on the GrAF proposal for standoff annotations.

pdf bib
Automatic lexical semantic classification of nouns
Núria Bel | Lauren Romeo | Muntsa Padró
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The work we present here addresses cue-based noun classification in English and Spanish. Its main objective is to automatically acquire lexical semantic information by classifying nouns into previously known noun lexical classes. This is achieved by using particular aspects of linguistic contexts as cues that identify a specific lexical class. Here we concentrate on the task of identifying such cues and on the theoretical background that allows for an assessment of the complexity of the task. The results show that, despite the a priori complexity of the task, cue-based classification is a useful tool in the automatic acquisition of lexical semantic classes.

pdf bib
The IULA Treebank
Montserrat Marimon | Beatriz Fisas | Núria Bel | Marta Villegas | Jorge Vivaldi | Sergi Torner | Mercè Lorente | Silvia Vázquez
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper describes ongoing work on the construction of a new treebank for Spanish, the IULA Treebank. This new resource will contain about 60,000 richly annotated sentences, as an extension of the already existing IULA Technical Corpus, which is only PoS-tagged. In this paper we focus on describing the work done to define the annotation process and the treebank design principles. We report on how the framework used, the DELPH-IN processing framework, has been crucial for the design principles and the bootstrapping strategy followed, especially as regards the use of stochastic modules for reducing parsing overgeneration. We also report on the different evaluation experiments carried out to guarantee the quality of the results already available.

pdf bib
Towards a User-Friendly Platform for Building Language Resources based on Web Services
Marc Poch | Antonio Toral | Olivier Hamon | Valeria Quochi | Núria Bel
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

This paper presents the platform developed in the PANACEA project, a distributed factory that automates the stages involved in the acquisition, production, updating and maintenance of Language Resources required by Machine Translation and other Language Technologies. We adopt a set of tools that have been successfully used in the Bioinformatics field; they are adapted to the needs of our field and used to deploy web services, which can be combined to build more complex processing chains (workflows). This paper describes the platform and its different components (web services, registry, workflows, social network and interoperability). We demonstrate the scalability of the platform by carrying out a set of massive data experiments. Finally, a validation of the platform against a set of required criteria proves its usability for different types of users (non-technical users and providers).

pdf bib
Using Language Resources in Humanities research
Marta Villegas | Nuria Bel | Carlos Gonzalo | Amparo Moreno | Nuria Simelio
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

In this paper we present two real cases, in the fields of newspaper discourse analysis and communication research, which demonstrate the impact of Language Resources (LRs) and NLP on the humanities. We describe our collaboration with (i) the Feminario research group at the UAB, which has been investigating androcentric practices in the Spanish general press since the 80s and whose research suggests that the Spanish general press has undergone a dehumanization process that excludes women and men, and (ii) the “Municipals'11 online” project, which investigates the Spanish local election campaign in the blogosphere. We show how NLP tools and LRs make so-called ‘e-Humanities research’ possible, as they provide the humanities with tools to perform intensive and automatic text analyses. Language technologies have evolved considerably and are mature enough to provide useful tools to researchers dealing with large amounts of textual data. The language resources developed within the field of NLP have proven useful for other disciplines that are unaware of their existence and would nevertheless greatly benefit from them, as they provide (i) exhaustiveness, to guarantee that data coverage is wide and representative enough, and (ii) reliable and significant results, to guarantee that the reported results are statistically significant.

pdf bib
The FLaReNet Strategic Language Resource Agenda
Claudia Soria | Núria Bel | Khalid Choukri | Joseph Mariani | Monica Monachini | Jan Odijk | Stelios Piperidis | Valeria Quochi | Nicoletta Calzolari
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

The FLaReNet Strategic Agenda highlights the most pressing needs of the sector of Language Resources and Technologies and presents a set of recommendations for its development and progress in Europe, as the outcome of the three-year consultation carried out by the FLaReNet European project. The FLaReNet recommendations are organised around nine dimensions: a) documentation b) interoperability c) availability, sharing and distribution d) coverage, quality and adequacy e) sustainability f) recognition g) development h) infrastructure and i) international cooperation. As such, they cover a broad range of topics and activities, spanning the production and use of language resources, licensing, maintenance and preservation issues, infrastructures for language resources, resource identification and sharing, evaluation and validation, interoperability, and policy issues. The intended recipients belong to a large set of players and stakeholders in Language Resources and Technology, ranging from individuals to research and education institutions, policy-makers, funding agencies, SMEs and large companies, and service and media providers. The main goal of these recommendations is to serve as an instrument to support stakeholders in planning for and addressing the urgent needs of the Language Resources and Technologies of the future.

pdf bib
Language Resources Factory: case study on the acquisition of Translation Memories
Marc Poch | Antonio Toral | Núria Bel
Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

bib
Traitement Automatique des Langues, Volume 52, Numéro 3 : Ressources linguistiques libres [Free Language Resources]
Nuria Bel | Benoît Sagot
Traitement Automatique des Langues, Volume 52, Numéro 3 : Ressources linguistiques libres [Free Language Resources]

pdf bib
Introduction - Ressources linguistiques libres [Foreword - Free Language Resources]
Nuria Bel | Benoît Sagot
Traitement Automatique des Langues, Volume 52, Numéro 3 : Ressources linguistiques libres [Free Language Resources]

pdf bib
A Method Towards the Fully Automatic Merging of Lexical Resources
Núria Bel | Muntsa Padró | Silvia Necsulescu
Proceedings of the Workshop on Language Resources, Technology and Services in the Sharing Paradigm

pdf bib
Interoperability and Technology for a Language Resources Factory
Marc Poch | Núria Bel
Proceedings of the Workshop on Language Resources, Technology and Services in the Sharing Paradigm

pdf bib
Identification of sense selection in regular polysemy using shallow features
Héctor Martínez Alonso | Núria Bel | Bolette Sandford Pedersen
Proceedings of the 18th Nordic Conference of Computational Linguistics (NODALIDA 2011)

pdf bib
Towards the Automatic Merging of Lexical Resources: Automatic Mapping
Muntsa Padró | Núria Bel | Silvia Necsulescu
Proceedings of the International Conference Recent Advances in Natural Language Processing 2011

2010

pdf bib
Automatic Detection of Non-deverbal Event Nouns for Quick Lexicon Production
Nuria Bel | Maria Coll | Gabriela Resnik
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

pdf bib
Handling of Missing Values in Lexical Acquisition
Núria Bel
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In this work we propose a strategy to reduce the impact of the sparse data problem in tasks of lexical information acquisition based on the observation of linguistic cues. We propose a way to handle the uncertainty created by missing values, that is, when a zero value could mean either that the cue has not been observed because the word in question does not belong to the class (negative evidence), or that the word in question simply has not been observed in the sought context by chance (lack of evidence). This uncertainty creates problems for the learner: zero values for incompatibly labelled examples make the cue lose its predictive capacity, and even when some samples do display the sought context, it is not taken into account. In this paper we present the results of our experiments in trying to reduce this uncertainty by, as other authors do (Joanis et al. 2007, for instance), substituting pre-processed estimates for zero values. We present a first round of experiments that have been the basis for estimates of linguistic information motivated by lexical classes. The experimental results show a clear benefit of the proposed approach.
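A toy version of the zero-substitution idea, assuming a matrix of cue counts X and class labels y: zeros are replaced by a scaled class-conditional mean, so that 'not observed by chance' no longer looks identical to 'never occurs'. The estimator here is a stand-in for the paper's linguistically motivated estimates.

```python
import numpy as np

def fill_missing(X, y, eps=0.5):
    """Replace zero cue values with eps * (mean cue value within the
    item's class). X has one row per word and one column per cue."""
    X = X.astype(float).copy()
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        block = X[idx]
        est = eps * block.mean(axis=0)  # per-cue class-conditional estimate
        block[block == 0] = np.broadcast_to(est, block.shape)[block == 0]
        X[idx] = block
    return X
```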

pdf bib
A Case Study on Interoperability for Language Resources and Applications
Marta Villegas | Núria Bel | Santiago Bel | Víctor Rodríguez
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper reports our experience integrating different resources and services into a grid environment. The use case we address implies the deployment of several NLP applications as web services. The ultimate objective of this task was to create a scenario where researchers have access to a variety of services they can operate. These services should be easy to invoke and able to interoperate with one another. We describe the interoperability problems we faced, which involve metadata interoperability, data interoperability and service interoperability. We devote special attention to service interoperability and explore the possibility of defining common interfaces and semantic descriptions of services. While the web services paradigm suits the integration of different services very well, it requires mutual understanding and accommodation to common interfaces that not only provide a technical solution but also ease the user's work. Defining common interfaces benefits interoperability but requires agreement on the operations and the set of inputs/outputs. Semantic annotation allows the definition of a taxonomy that organizes and collects the set of admissible operations and the types of input/output parameters.

pdf bib
Resource and Service Centres as the Backbone for a Sustainable Service Infrastructure
Peter Wittenburg | Nuria Bel | Lars Borin | Gerhard Budin | Nicoletta Calzolari | Eva Hajicova | Kimmo Koskenniemi | Lothar Lemnitzer | Bente Maegaard | Maciej Piasecki | Jean-Marie Pierrel | Stelios Piperidis | Inguna Skadina | Dan Tufis | Remco van Veenendaal | Tamas Váradi | Martin Wynne
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Currently, research infrastructures are being designed and established in many disciplines, since they all suffer from an enormous fragmentation of their resources and tools. In the domain of language resources and tools, the CLARIN initiative has been funded since 2008 to overcome many of the integration and interoperability hurdles. CLARIN can build on knowledge and work from many projects carried out during the last years and aims to build stable and robust services that can be used by researchers. Service centres that have the potential to be persistent and that adhere to the criteria established by CLARIN will play an important role here. In the last year of the so-called preparatory phase, these centres are developing four use cases that demonstrate how the various pillars CLARIN has been working on can be integrated. All four use cases fulfil the criterion of being cross-national.

2009

pdf bib
Proceedings of the Workshop on Adaptation of Language Resources and Technology to New Domains
Núria Bel | Erhard Hinrichs | Petya Osenova | Kiril Simov
Proceedings of the Workshop on Adaptation of Language Resources and Technology to New Domains

2008

pdf bib
Automatic Acquisition for low frequency lexical items
Núria Bel | Sergio Espeja | Montserrat Marimon
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

This paper addresses a specific case of the task of lexical acquisition, understood as the induction of information about the linguistic characteristics of lexical items on the basis of their occurrences in texts. Most recent work in the area of lexical acquisition has used methods that take as much textual data as possible as a source of evidence, but their performance decreases notably when only a few occurrences of a word are available. The importance of covering such low-frequency items lies in the fact that a large proportion of the words in any particular collection of texts occur only a few times, if not just once. Our work proposes to compensate for the lack of information by resorting to linguistic knowledge about the characteristics of lexical classes. This knowledge, obtained from a lexical typology, is formulated probabilistically and used in a Bayesian method to maximize the information gathered from single occurrences in order to predict the full set of characteristics of the word. Our results show that our method achieves better results than others for the treatment of low-frequency items.
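The Bayesian idea can be sketched as a Bernoulli naive-Bayes decision over a single occurrence's binary cue vector, with class priors and per-cue probabilities supplied by the lexical typology. The exact estimates and decision rule in the paper may differ.

```python
import numpy as np

def predict_lexical_class(x, prior, theta):
    """Bernoulli naive-Bayes decision for one occurrence: x is a binary
    cue vector, prior[c] the class prior, and theta[c, i] = P(cue i | c),
    here taken as given from the lexical typology. Returns the index of
    the most probable lexical class."""
    log_post = (np.log(prior)
                + x @ np.log(theta).T
                + (1 - x) @ np.log(1 - theta).T)
    return int(np.argmax(log_post))
```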

pdf bib
COLDIC, a Lexicographic Platform for LMF compliant lexica
Núria Bel | Sergio Espeja | Montserrat Marimon | Marta Villegas
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Despite the importance of lexical resources for a number of NLP applications (Machine Translation, Information Extraction and Question Answering, among others), there has been a traditional lack of generic tools for the creation, maintenance and management of computational lexica. The most direct obstacle to the development of generic tools, independent of any particular application format, was the lack of standards for the description and encoding of lexical resources. The availability of the Lexical Markup Framework (LMF) has changed this scenario and made the development of generic lexical platforms possible. COLDIC is a generic platform for working with computational lexica. The system has been designed to let the user concentrate on lexicographic tasks while remaining autonomous in the management of the tools. The creation and maintenance of the database, which is the core of the tool, demand no specific training in databases. An LMF-compliant schema, implemented in a Document Type Definition (DTD) describing the lexical resources, is taken by the system to automatically configure the platform. Besides, the most standard web services for interoperability are also generated automatically. Other components of the platform include built-in functions supporting the most common tasks of lexicographic work.

2007

pdf bib
Automatic Acquisition of Grammatical Types for Nouns
Núria Bel | Sergio Espeja | Montserrat Marimon
Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers

pdf bib
The Spanish Resource Grammar: Pre-processing Strategy and Lexical Acquisition
Montserrat Marimon | Núria Bel | Sergio Espeja | Natalia Seghezzi
ACL 2007 Workshop on Deep Linguistic Processing

2006

pdf bib
New tools for the encoding of lexical data extracted from corpus
Núria Bel | Sergio Espeja | Montserrat Marimon
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

This paper describes the methodology and tools that are the basis of our platform AAILE. AAILE has been built to supply those working on the construction of lexicons for syntactic parsing with more efficient ways of visualizing and analyzing data extracted from corpora. The platform offers support using techniques such as similarity measures, clustering and pattern classification.

pdf bib
Lexical Markup Framework (LMF)
Gil Francopoulo | Monte George | Nicoletta Calzolari | Monica Monachini | Nuria Bel | Mandy Pet | Claudia Soria
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Optimizing the production, maintenance and extension of lexical resources is one of the crucial aspects impacting Natural Language Processing (NLP). A second aspect involves optimizing the process leading to their integration in applications. In this respect, we believe that the production of a consensual specification on lexicons can be a useful aid for the various NLP actors. Within ISO, the purpose of LMF is to define a standard for lexicons. LMF is a model that provides a common standardized framework for the construction of NLP lexicons. The goals of LMF are to provide a common model for the creation and use of lexical resources, to manage the exchange of data between and among these resources, and to enable the merging of a large number of individual electronic resources to form extensive global electronic resources. In this paper, we describe the work in progress within the sub-group ISO-TC37/SC4/WG4. Experts from many countries have been consulted in order to take into account best practices across many languages for (we hope) all kinds of NLP lexicons.

pdf bib
Lexical Markup Framework (LMF) for NLP Multilingual Resources
Gil Francopoulo | Nuria Bel | Monte George | Nicoletta Calzolari | Monica Monachini | Mandy Pet | Claudia Soria
Proceedings of the Workshop on Multilingual Language Resources and Interoperability

2004

pdf bib
Corpus representativeness for syntactic information acquisition
Núria Bel
Proceedings of the ACL Interactive Poster and Demonstration Sessions

pdf bib
Cost-effective Cross-lingual Document Classification
Núria Bel | Cornelis H.A. Koster | Marta Villegas
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

pdf bib
Lexical Entry Templates for Robust Deep Parsing
Montserrat Marimon | Núria Bel
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

We report on the development and employment of lexical entry templates in a large-coverage unification-based grammar of Spanish. The aim of the work reported in this paper is to provide robust deep linguistic processing in order to make the grammar more adequate for industrial NLP applications.

2002

pdf bib
From DTD to relational dB. An automatic generation of a lexicographical station out off ISLE guidelines
Marta Villegas | Nuria Bel
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

pdf bib
Design and Evaluation of a SLDS for E-Mail Access through the Telephone
Nuria Bel | Javier Caminero | Luis Hernández | Montserrat Marimón | José F. Morlesín | Josep M. Otero | José Relaño | M. Carmen Rodríguez | Pedro M. Ruz | Daniel Tapias
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

pdf bib
From Resources to Applications. Designing the Multilingual ISLE Lexical Entry
Sue Atkins | Nuria Bel | Francesca Bertagna | Pierrette Bouillon | Nicoletta Calzolari | Christiane Fellbaum | Ralph Grishman | Alessandro Lenci | Catherine MacLeod | Martha Palmer | Gregor Thurmair | Marta Villegas | Antonio Zampolli
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

2001

pdf bib
The ISLE in the ocean. Transatlantic standards for multilingual lexicons (with an eye to machine translation)
Nicoletta Calzolari | Alessandro Lenci | Antonio Zampolli | Nuria Bel | Marta Villegas | Gregor Thurmair
Proceedings of Machine Translation Summit VIII

The ISLE project is a continuation of the long-standing EAGLES initiative, carried out under the Human Language Technology (HLT) programme as a collaboration between American and European groups in the framework of the EU-US International Research Co-operation, supported by the NSF and the EC. In this paper we concentrate on the current position of the ISLE Computational Lexicon Working Group (CLWG), whose activities aim at defining a general schema for a multilingual lexical entry (MILE), as the basis for a standard framework for multilingual computational lexicons. The needs and features of existing Machine Translation systems provide the main reference points for the process of consensual definition of the MILE. The overall structure of the MILE is illustrated with particular attention to some of the issues raised for multilingual lexicons by the need to express complex transfer conditions among translation equivalents.

2000

pdf bib
SIMPLE: A General Framework for the Development of Multilingual Lexicons
Nuria Bel | Federica Busa | Nicoletta Calzolari | Elisabetta Gola | Alessandro Lenci | Monica Monachini | Antoine Ogonowski | Ivonne Peters | Wim Peters | Nilda Ruimy | Marta Villegas | Antonio Zampolli
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

pdf bib
Multilingual Linguistic Resources: From Monolingual Lexicons to Bilingual Interrelated Lexicons
Marta Villegas | Nuria Bel | Alessandro Lenci | Nicoletta Calzolari | Nilda Ruimy | Antonio Zampolli | Teresa Sadurní | Joan Soler
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)
