Laurent Romary

Also published as: L. Romary


2023

ISO LMF 24613-6: A Revised Syntax Semantics Module for the Lexical Markup Framework
Francesca Frontini | Laurent Romary | Anas Fahad Khan
Proceedings of the 4th Conference on Language, Data and Knowledge

CamemBERT-bio : Un modèle de langue français savoureux et meilleur pour la santé
Rian Touchent | Laurent Romary | Eric De La Clergerie
Actes de CORIA-TALN 2023. Actes de la 30e Conférence sur le Traitement Automatique des Langues Naturelles (TALN), volume 1 : travaux de recherche originaux -- articles longs

Clinical data in hospitals are increasingly accessible for research through clinical data warehouses; however, these documents are unstructured, so the relevant information has to be extracted from the medical reports. Transfer learning with BERT-like models such as CamemBERT has enabled major advances, in particular for named entity recognition. However, these models are trained on everyday language and perform less well on biomedical data. We therefore propose a new public French biomedical dataset on which we continued the pre-training of CamemBERT. We present a first version of CamemBERT-bio, a public model specialised for the French biomedical domain, which yields an average gain of 2.54 points of F-measure across several biomedical named entity recognition evaluation sets.
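
As an illustration only (not taken from the paper), the following Python sketch shows how such a checkpoint would typically be prepared for biomedical NER fine-tuning with the Hugging Face transformers library; the Hub identifier and the label set below are assumptions.

    # Illustrative sketch only: preparing a CamemBERT-bio-style checkpoint for
    # biomedical NER fine-tuning. The Hub identifier and the label set are
    # assumptions, not taken from the paper.
    from transformers import AutoModelForTokenClassification, AutoTokenizer

    MODEL_ID = "almanach/camembert-bio-base"  # assumed identifier
    LABELS = ["O", "B-CHEM", "I-CHEM", "B-DISO", "I-DISO"]  # illustrative tags

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForTokenClassification.from_pretrained(
        MODEL_ID, num_labels=len(LABELS)
    )

    # Tokenise one clinical sentence; the encoding can then be paired with
    # token-level labels and fine-tuned with the usual Trainer loop.
    encoding = tokenizer(
        "Patient traité par amoxicilline pour une pneumonie.",
        return_tensors="pt",
    )
    print(encoding.input_ids.shape)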

MaTOS: Traduction automatique pour la science ouverte
Maud Bénard | Alexandra Mestivier | Natalie Kubler | Lichao Zhu | Rachel Bawden | Eric De La Clergerie | Laurent Romary | Mathilde Huguin | Jean-François Nominé | Ziqian Peng | François Yvon
Actes de CORIA-TALN 2023. Actes de l'atelier "Analyse et Recherche de Textes Scientifiques" (ARTS)@TALN 2023

This contribution presents the MaTOS (Machine Translation for Open Science) project, which aims to develop new methods for the full-document machine translation (MT) of scientific texts between French and English, together with automatic metrics to assess the quality of the resulting translations. To this end, MaTOS focuses on (a) collecting open resources for specialised MT; (b) describing textual coherence markers in scientific articles; (c) developing new multilingual document-level processing methods; and (d) metrics that measure progress in the translation of complete documents.

2022

BERTrade: Using Contextual Embeddings to Parse Old French
Loïc Grobol | Mathilde Regnault | Pedro Ortiz Suarez | Benoît Sagot | Laurent Romary | Benoit Crabbé
Proceedings of the Thirteenth Language Resources and Evaluation Conference

The successes of contextual word embeddings learned by training large-scale language models, while remarkable, have mostly occurred for languages where significant amounts of raw text are available and where annotated data for downstream tasks have a relatively regular spelling. Conversely, it is not yet completely clear whether these models are also well suited for lesser-resourced and more irregular languages. We study the case of Old French, which is in the interesting position of having a relatively limited amount of available raw text, but enough annotated resources to assess the relevance of contextual word embedding models for downstream NLP tasks. In particular, we use POS-tagging and dependency parsing to evaluate the quality of such models in a large array of configurations, including models trained from scratch from small amounts of raw text and models pre-trained on other languages but fine-tuned on Medieval French data.

Towards a Cleaner Document-Oriented Multilingual Crawled Corpus
Julien Abadji | Pedro Ortiz Suarez | Laurent Romary | Benoît Sagot
Proceedings of the Thirteenth Language Resources and Evaluation Conference

The need for large raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods in Natural Language Processing. While there have been some recent attempts to manually curate the data necessary to train large language models, the main way to obtain such data is still automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant, which extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that should prove more suitable for pre-training large generative language models, and hopefully for other applications in Natural Language Processing and the Digital Humanities.

2021

Building A Corporate Corpus For Threads Constitution
Lionel Tadonfouet Tadjou | Fabrice Bourge | Tiphaine Marie | Laurent Romary | Éric de la Clergerie
Proceedings of the Student Research Workshop Associated with RANLP 2021

In this paper we describe the process of building a corporate corpus that will be used as a reference for modelling and computing threads from conversations generated using communication and collaboration tools. The overall goal of thread reconstruction is to provide value to the collaborator in various use cases, such as highlighting the important parts of a running discussion or reviewing upcoming commitments or deadlines. Since, to our knowledge, there is no available corporate corpus for the French language that would allow us to address this problem of thread constitution, we present here a method for building such corpora, including the different aspects and steps that led to the creation of a pipeline to pseudo-anonymise data. Such a pipeline is a response to the constraints induced by the General Data Protection Regulation (GDPR) in Europe and to compliance with the secrecy of correspondence.

2020

Modelling Etymology in LMF/TEI: The Grande Dicionário Houaiss da Língua Portuguesa Dictionary as a Use Case
Fahad Khan | Laurent Romary | Ana Salgado | Jack Bowers | Mohamed Khemakhem | Toma Tasovac
Proceedings of the Twelfth Language Resources and Evaluation Conference

In this article we introduce two parts of the new multi-part version of the Lexical Markup Framework (LMF) ISO standard: Part 3 (ISO 24613-3), which deals with etymological and diachronic data, and Part 4 (ISO 24613-4), which consists of a TEI serialisation of all the prior parts of the model. We demonstrate the use of both standards by describing the LMF encoding of a small number of examples taken from a sample conversion of the reference Portuguese dictionary Grande Dicionário Houaiss da Língua Portuguesa, part of a broader experiment comprising the analysis of different, heterogeneously encoded, Portuguese lexical resources. We present the examples in the Unified Modelling Language (UML) and, in a couple of cases, in TEI.

Establishing a New State-of-the-Art for French Named Entity Recognition
Pedro Javier Ortiz Suárez | Yoann Dupont | Benjamin Muller | Laurent Romary | Benoît Sagot
Proceedings of the Twelfth Language Resources and Evaluation Conference

The French TreeBank developed at the University Paris 7 is the main source of morphosyntactic and syntactic annotations for French. However, it does not include explicit information related to named entities, which are among the most useful kinds of information for several natural language processing tasks and applications. Moreover, no large-scale French corpus with named entity annotations contains referential information, which complements the type and the span of each mention with an indication of the entity it refers to. We have manually annotated the French TreeBank with such information, after an automatic pre-annotation step. We sketch the underlying annotation guidelines and we provide a few figures about the resulting annotations.

A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages
Pedro Javier Ortiz Suárez | Laurent Romary | Benoît Sagot
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.

CamemBERT: a Tasty French Language Model
Louis Martin | Benjamin Muller | Pedro Javier Ortiz Suárez | Yoann Dupont | Laurent Romary | Éric de la Clergerie | Djamé Seddah | Benoît Sagot
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models (in all languages except English) very limited. In this paper, we investigate the feasibility of training monolingual Transformer-based language models for other languages, taking French as an example and evaluating our language models on part-of-speech tagging, dependency parsing, named entity recognition and natural language inference tasks. We show that the use of web crawled data is preferable to the use of Wikipedia data. More surprisingly, we show that a relatively small web crawled dataset (4GB) leads to results that are as good as those obtained using larger datasets (130+GB). Our best performing model, CamemBERT, reaches or improves the state of the art in all four downstream tasks.
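
Since the model is released publicly, the simplest way to see the behaviour described above is masked-token prediction. A minimal sketch, assuming the checkpoint is available on the Hugging Face Hub as "camembert-base" and that the transformers library (with a PyTorch backend) is installed; this example is illustrative and not part of the paper.

    # Minimal sketch: masked-token prediction with CamemBERT through the
    # Hugging Face pipeline API (assumes the "camembert-base" checkpoint
    # and an installed PyTorch backend).
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="camembert-base")

    # CamemBERT follows RoBERTa and uses "<mask>" as its mask token.
    for prediction in fill_mask("Le camembert est <mask> !"):
        print(prediction["token_str"], round(prediction["score"], 3))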

Les modèles de langue contextuels Camembert pour le français : impact de la taille et de l’hétérogénéité des données d’entrainement (CamemBERT Contextual Language Models for French: Impact of Training Data Size and Heterogeneity)
Louis Martin | Benjamin Muller | Pedro Javier Ortiz Suárez | Yoann Dupont | Laurent Romary | Éric Villemonte de la Clergerie | Benoît Sagot | Djamé Seddah
Actes de la 6e conférence conjointe Journées d'Études sur la Parole (JEP, 33e édition), Traitement Automatique des Langues Naturelles (TALN, 27e édition), Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RÉCITAL, 22e édition). Volume 2 : Traitement Automatique des Langues Naturelles

Contextual neural language models are now ubiquitous in natural language processing. Until recently, most available models had been trained either on English data or on the concatenation of data in several languages, so the practical use of these models in all languages other than English was limited. The recent release of several monolingual models based on BERT (Devlin et al., 2019), notably for French, has demonstrated the value of such models by improving the state of the art for all evaluated tasks. In this article, based on experiments carried out on CamemBERT (Martin et al., 2019), we show that using data with high variability is preferable to more uniform data. More surprisingly, we show that using a relatively small set of web data (4GB) gives results as good as those obtained from datasets two orders of magnitude larger (138GB).

2017

TBX in ODD: Schema-agnostic specification and documentation for TermBase eXchange
Stefan Pernes | Laurent Romary
Proceedings of Language, Ontology, Terminology and Knowledge Structures Workshop (LOTKS 2017)

2016

TermITH-Eval: a French Standard-Based Resource for Keyphrase Extraction Evaluation
Adrien Bougouin | Sabine Barreaux | Laurent Romary | Florian Boudin | Béatrice Daille
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Keyphrase extraction is the task of finding phrases that represent the important content of a document. The main aim of keyphrase extraction is to propose textual units that represent the most important topics developed in a document. The output keyphrases of automatic keyphrase extraction methods for test documents are typically evaluated by comparing them to manually assigned reference keyphrases. Each output keyphrase is considered correct if it matches one of the reference keyphrases. However, the choice of the appropriate textual unit (keyphrase) for a topic is sometimes subjective and evaluating by exact matching underestimates the performance. This paper presents a dataset of evaluation scores assigned to automatically extracted keyphrases by human evaluators. Along with the reference keyphrases, the manual evaluations can be used to validate new evaluation measures. Indeed, an evaluation measure that is highly correlated to the manual evaluation is appropriate for the evaluation of automatic keyphrase extraction methods.

2015

Automatic Construction of a TMF Terminological Database using a Transducer Cascade
Chihebeddine Ammar | Kais Haddar | Laurent Romary
Proceedings of the International Conference Recent Advances in Natural Language Processing

2014

Book Review: Natural Language Processing for Historical Texts by Michael Piotrowski
Laurent Romary
Computational Linguistics, Volume 40, Issue 1 - March 2014

2012

Collaborative Machine Translation Service for Scientific texts
Patrik Lambert | Jean Senellart | Laurent Romary | Holger Schwenk | Florian Zipser | Patrice Lopez | Frédéric Blain
Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics

2010

ISO-TimeML: An International Standard for Semantic Annotation
James Pustejovsky | Kiyong Lee | Harry Bunt | Laurent Romary
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

In this paper, we present ISO-TimeML, a revised and interoperable version of the temporal markup language, TimeML. We describe the changes and enrichments made, while framing the effort in a more general methodology of semantic annotation. In particular, we assume a principled distinction between the annotation of an expression and the representation which that annotation denotes. This involves not only the specification of an annotation language for a particular phenomenon, but also the development of a meta-model that allows one to interpret the syntactic expressions of the specification semantically.

MLIF : A Metamodel to Represent and Exchange Multilingual Textual Information
Samuel Cruz-Lara | Gil Francopoulo | Laurent Romary | Nasredine Semmar
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The fast evolution of language technology has produced pressing needs for standardization. The multiplicity of representation levels for language resources and the specialization of these representations make interaction between linguistic resources and the components that manipulate them difficult. In this paper, we describe the MultiLingual Information Framework (MLIF, ISO CD 24616). MLIF is a metamodel that allows the representation and exchange of multilingual textual information. This generic metamodel is designed to provide a common platform for all the tools developed around the existing multilingual data exchange formats. This platform provides, on the one hand, a set of generic data categories for various application domains and, on the other hand, strategies for interoperability with existing standards. The objective is to reach better convergence between the heterogeneous standardisation activities taking place in the domains of data modelling (XML; W3C), text management (TEI; TEIC), multilingual information (TMX-LISA; XLIFF-OASIS) and multimedia (SMILText; W3C). This is a work in progress within ISO-TC37 aimed at defining a new ISO standard.

Towards an ISO Standard for Dialogue Act Annotation
Harry Bunt | Jan Alexandersson | Jean Carletta | Jae-Woong Choe | Alex Chengyu Fang | Koiti Hasida | Kiyong Lee | Volha Petukhova | Andrei Popescu-Belis | Laurent Romary | Claudia Soria | David Traum
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

This paper describes an ISO project which aims at developing a standard for annotating spoken and multimodal dialogue with semantic information concerning the communicative functions of utterances, the kind of semantic content they address, and their relations with what was said and done earlier in the dialogue. The project, ISO 24617-2 "Semantic annotation framework, Part 2: Dialogue acts", is currently at DIS stage. The proposed annotation schema distinguishes 9 orthogonal dimensions, allowing each functional segment in dialogue to have a function in each of these dimensions, thus accounting for the multifunctionality that utterances in dialogue often have. A number of core communicative functions are defined in the form of ISO data categories, available at http://semantic-annotation.uvt.nl/dialogue-acts/iso-datcats.pdf; they are divided into "dimension-specific" functions, which can be used only in a particular dimension, such as Turn Accept in the Turn Management dimension, and "general-purpose" functions, which can be used in any dimension, such as Inform and Request. An XML-based annotation language, "DiAML", is defined, with an abstract syntax, a semantics, and a concrete syntax.

GRISP: A Massive Multilingual Terminological Database for Scientific and Technical Domains
Patrice Lopez | Laurent Romary
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The development of a multilingual terminology is a very long and costly process. We present the creation of a multilingual terminological database called GRISP, covering multiple technical and scientific fields from various open resources. A crucial aspect is the merging of the different resources, which in our proposal is based on the definition of a sound conceptual model, mappings between domains, and the use of structural constraints and machine learning techniques to control the fusion process. The result is a massive terminological database of several million terms, concepts, semantic relations and definitions. The accuracy of the concept merging across resources has been evaluated with several methods. This resource has allowed us to significantly improve the mean average precision of an information retrieval system applied to a large collection of multilingual and multi-domain patent documents. New specialized terminologies, not specifically created for text processing applications, can be aggregated and merged into GRISP with minimal manual effort.

HUMB: Automatic Key Term Extraction from Scientific Articles in GROBID
Patrice Lopez | Laurent Romary
Proceedings of the 5th International Workshop on Semantic Evaluation

2008

Foundation of a Component-based Flexible Registry for Language Resources and Technology
Daan Broeder | Thierry Declerck | Erhard Hinrichs | Stelios Piperidis | Laurent Romary | Nicoletta Calzolari | Peter Wittenburg
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)

Within the CLARIN e-science infrastructure project it is foreseen to develop a component-based registry for metadata on Language Resources and Language Technology. This registry is intended to overcome the problems of currently available systems with respect to inflexible fixed schemas, unsuitable terminology and interoperability. The registry will address interoperability needs by referring to a shared vocabulary registered in data category registries such as those suggested by ISO.

2006

An API for accessing the Data Category Registry
Marc Kemps-Snijders | Julien Ducret | Laurent Romary | Peter Wittenburg
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Central ontologies are increasingly important for managing interoperability between different types of language resources. This is why ISO set up a new committee, ISO TC37/SC4, to take care of language resource management issues. Central to the work of this committee is the definition of a framework for a central registry of data categories that are important in the domain of language resources. This paper describes an application programming interface designed to request services from this Data Category Registry (DCR). The DCR is operational and the described API has already been tested from a lexicon application.

Foundations of Modern Language Resource Archives
Peter Wittenburg | Daan Broeder | Wolfgang Klein | Stephen Levinson | Laurent Romary
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

A number of serious reasons will convince an increasing number of researchers to store their relevant material in centers which we will call "language resource archives". These centers combine the duty of taking care of long-term preservation with the task of giving different user groups access to their material. Access here is meant in the sense that active interaction with the data is made possible, to support the integration of new data, new versions or commentaries of all sorts. Modern language resource archives will have to adhere to a number of basic principles to fulfill all requirements, and they will have to be involved in federations to create joint language resource domains, making it even simpler for researchers to access the data. This paper attempts to formulate the essential pillars language resource archives have to adhere to.

Metadata Profile in the ISO Data Category Registry
Freddy Offenga | Daan Broeder | Peter Wittenburg | Julien Ducret | Laurent Romary
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

Metadata descriptions of language resources are becoming an increasing necessity, since the sheer amount of language resources is increasing rapidly and especially since we are now creating infrastructures to access these resources via the web through integrated domains of language resource archives. Yet the metadata frameworks offered for the domain of language resources (IMDI and OLAC), although mature, are not as widely accepted as necessary. The lack of confidence in the stability and persistence of the concepts and formats introduced by these metadata sets seems to be one reason why people do not invest the time needed for metadata creation. The introduction of these concepts into an ISO standardization process may convince contributors to make use of the terminology. The availability of the ISO Data Category Registry, which includes a metadata profile, will also offer researchers the opportunity to construct their own metadata set tailored to the needs of the project at hand while nevertheless supporting interoperability.

Representing Linguistic Corpora and Their Annotations
Nancy Ide | Laurent Romary
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

A Linguistic Annotation Framework (LAF) is being developed within the International Standards Organization Technical Committee 37 Sub-committee on Language Resource Management (ISO TC37 SC4). LAF is intended to provide a standardized means to represent linguistic data and its annotations that is defined broadly enough to accommodate all types of linguistic annotations, and at the same time provide means to represent precise and potentially complex linguistic information. The general principles informing the design of LAF have been previously reported (Ide and Romary, 2003; Ide and Romary, 2004a). This paper describes some of the more technical aspects of the LAF design that have been addressed in the process of finalizing the specifications for the standard.

A Lexicalized Tree-Adjoining Grammar for Vietnamese
H. Phuong Le | T. M. Huyen Nguyen | Laurent Romary | Azim Roussanaly
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)

In this paper, we present the first sizable grammar built for Vietnamese using LTAG, developed over the past two years, named vnLTAG. This grammar aims at modelling written language and is general enough to be both application- and domain-independent. It can be used for the morpho-syntactic tagging and syntactic parsing of Vietnamese texts, as well as text generation. We then present a robust parsing scheme using vnLTAG and a parser for the grammar. We finish with an evaluation using a test suite.

2004

Towards a Reference Annotation Framework
Susanne Salmon-Alt | Laurent Romary
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Developping Tools and Building Linguistic Resources for Vietnamese Morpho-syntactic Processing
Thanh Bon Nguyen | Thi Minh Huyen Nguyen | Laurent Romary | Xuan Luong Vu
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Online Evaluation of Coreference Resolution
Andrei Popescu-Belis | Loïs Rigouste | Susanne Salmon-Alt | Laurent Romary
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Multimodal Meaning Representation for Generic Dialogue Systems Architectures
Frédéric Landragin | Alexandre Denis | Annalisa Ricci | Laurent Romary
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

A unified language for communicative acts between agents is essential for the design of multi-agent architectures. Whatever the type of interaction (linguistic, multimodal, including particular aspects such as force feedback) and whatever the type of application (command dialogue, request dialogue, database querying), the concepts are common and we need a generic meta-model. In order to move towards task-independent systems, we need to clarify the module parameterization procedures. In this paper, we focus on the characteristics of a meta-model designed to represent meaning in linguistic and multimodal applications. This meta-model is called MMIL, for MultiModal Interface Language, and was first specified in the framework of the IST MIAMM European project. What we want to test here is how relevant MMIL is for a completely different context (a different task, a different interaction type, a different linguistic domain). We detail the exploitation of MMIL in the framework of the IST OZONE European project, and we draw conclusions on the role of MMIL in the parameterization of task-independent dialogue managers.

The French MEDIA/EVALDA Project: the Evaluation of the Understanding Capability of Spoken Language Dialogue Systems
Laurence Devillers | Hélène Maynard | Sophie Rosset | Patrick Paroubek | Kevin McTait | D. Mostefa | Khalid Choukri | Laurent Charnay | Caroline Bousquet | Nadine Vigouroux | Frédéric Béchet | Laurent Romary | Jean-Yves Antoine | J. Villaneau | Myriam Vergnes | J. Goulian
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

The aim of the MEDIA project is to design and test a methodology for the evaluation of context-dependent and independent spoken dialogue systems. We propose an evaluation paradigm based on the use of test suites from real-world corpora and a common semantic representation and common metrics. This paradigm should allow us to diagnose the context-sensitive understanding capability of dialogue systems. This paradigm will be used within an evaluation campaign involving several sites, all of which will carry out the task of querying information from a database.

A Large Metadata Domain of Language Resources
Daan Broeder | Thierry Declerck | Laurent Romary | Markus Uneson | Sven Strömqvist | Peter Wittenburg
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

A Registry of Standard Data Categories for Linguistic Annotation
Nancy Ide | Laurent Romary
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Experiments on Building Language Resources for Multi-Modal Dialogue Systems
Laurent Romary | Amalia Todirascu | David Langlois
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Towards an International Standard on Feature Structure Representation
Kiyong Lee | Lou Burnard | Laurent Romary | Eric de la Clergerie | Thierry Declerck | Syd Bauman | Harry Bunt | Lionel Clément | Tomaž Erjavec | Azim Roussanaly | Claude Roux
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Standardization in Multimodal Content Representation: Some Methodological Issues
Harry Bunt | Laurent Romary
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

La FREEBANK : vers une base libre de corpus annotés
Susanne Salmon-Alt | Eckhard Bick | Laurent Romary | Jean-Marie Pierrel
Actes de la 11ème conférence sur le Traitement Automatique des Langues Naturelles. Articles longs

Freely accessible French corpora annotated at linguistic levels other than morpho-syntax are insufficient both quantitatively and qualitatively. Starting from this observation, FREEBANK, built on top of automatic analysis tools whose output is revised manually, aims to be a collection of French corpora annotated at several levels (structural, morphological, syntactic, coreferential) and at different degrees of linguistic granularity that is freely accessible, encoded according to standardised schemas, integrates existing resources and remains open to progressive enrichment.

An Extensible Framework for Efficient Document Management using RDF and OWL
Erica Meena | Ashwani Kumar | Laurent Romary
Proceeedings of the Workshop on NLP and XML (NLPXML-2004): RDF/RDFS and OWL in Language Technology

Construction of Grammar Based Term Extraction Model for Japanese
Koichi Takeuchi | Kyo Kageura | Béatrice Daille | Laurent Romary
Proceedings of CompuTerm 2004: 3rd International Workshop on Computational Terminology

Standards going concrete : from LMF to Morphalou
Laurent Romary | Susanne Salmon-Alt | Gil Francopoulo
Proceedings of the Workshop on Enhancing and Using Electronic Dictionaries

2003

International Standard for a Linguistic Annotation Framework
Nancy Ide | Laurent Romary | Eric de la Clergerie
Proceedings of the HLT-NAACL 2003 Workshop on Software Engineering and Architecture of Language Technology Systems (SEALTS)

Outline of the International Standard Linguistic Annotation Framework
Nancy Ide | Laurent Romary
Proceedings of the ACL 2003 Workshop on Linguistic Annotation: Getting the Model Right

SYSTRAN new generation: the XML translation workflow
Jean Senellart | Christian Boitet | Laurent Romary
Proceedings of Machine Translation Summit IX: Papers

Customization of Machine Translation (MT) is a prerequisite for corporations to adopt the technology; it is therefore important but nonetheless challenging. Ongoing implementation shows that XML is an excellent exchange device between MT modules, efficiently enabling interaction between the user and the processes so as to reach highly granular, structure-based customization. Accomplished through an innovative approach called the SYSTRAN Translation Stylesheet, this method is coherent with the current evolution of the "authoring process". As a natural progression, the next stage in the customization process is the integration of MT into a multilingual tool kit designed for the "authoring process".

2002

Towards Reusable NLP Components
Amalia Todirascu | Eric Kow | Laurent Romary
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

LREP: A Language Repository Exchange Protocol
Daan Broeder | Peter Wittenburg | Thierry Declerck | Laurent Romary
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

Standards for Language Resources
Nancy Ide | Laurent Romary
Proceedings of the Third International Conference on Language Resources and Evaluation (LREC’02)

2001

A Common Framework for Syntactic Annotation
Nancy Ide | Laurent Romary
Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics

2000

XCES: An XML-based Encoding Standard for Linguistic Corpora
Nancy Ide | Patrice Bonhomme | Laurent Romary
Proceedings of the Second International Conference on Language Resources and Evaluation (LREC’00)

1999

The MATE meta-scheme for coreference in dialogues in multiple languages
M. Poesio | F. Bruneseaux | L. Romary
Towards Standards and Tools for Discourse Tagging

1998

Veins Theory: A Model of Global Discourse Cohesion and Coherence
Dan Cristea | Nancy Ide | Laurent Romary
36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1

Veins Theory: A Model of Global Discourse Cohesion and Coherence
Dan Cristea | Nancy Ide | Laurent Romary
COLING 1998 Volume 1: The 17th International Conference on Computational Linguistics

1997

Constraints on the Use of Language, Gesture and Speech for Multimodal Dialogues
Bertrand Gaiffe | Laurent Romary
Referring Phenomena in a Multimedia Context and their Computational Treatment
