François Lareau

Also published as: Francois Lareau


2023

Mod-D2T: A Multi-layer Dataset for Modular Data-to-Text Generation
Simon Mille | Francois Lareau | Stamatia Dasiopoulou | Anya Belz
Proceedings of the 16th International Natural Language Generation Conference

Rule-based text generators lack the coverage and fluency of their neural counterparts, but have two big advantages over them: (i) they are entirely controllable and do not hallucinate; and (ii) they can fully explain how an output was generated from an input. In this paper we leverage these two advantages to create large and reliable synthetic datasets with multiple human-intelligible intermediate representations. We present the Modular Data-to-Text (Mod-D2T) Dataset, which incorporates ten intermediate-level representations between input triple sets and output text; the mappings from one level to the next can broadly be interpreted as the traditional modular tasks of an NLG pipeline. We describe the Mod-D2T dataset, evaluate its quality via manual validation, and discuss its applications and limitations. Data, code and documentation are available at https://github.com/mille-s/Mod-D2T.

Proceedings of the Seventh International Conference on Dependency Linguistics (Depling, GURT/SyntaxFest 2023)
Owen Rambow | François Lareau
Proceedings of the Seventh International Conference on Dependency Linguistics (Depling, GURT/SyntaxFest 2023)

Predicates and entities in Abstract Meaning Representation
Antoine Venant | François Lareau
Proceedings of the Seventh International Conference on Dependency Linguistics (Depling, GURT/SyntaxFest 2023)

Nodes in Abstract Meaning Representation (AMR) are generally thought of as neo-Davidsonian entities. We review existing translations into neo-Davidsonian representations and show that these translations handle copula sentences inconsistently. We link the problem to an asymmetry arising from the problematic handling of words with no associated PropBank frame for the underlying predicate. We introduce a method to automatically and uniformly decompose AMR nodes into an entity part and a predicative part, which offers a consistent treatment of copula sentences and quasi-predicates such as brother or client.
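
A rough illustration of the asymmetry at issue, under standard neo-Davidsonian assumptions (a sketch of the problem and of one possible shape of the decomposition, not the authors' exact formalization): a node backed by a PropBank frame naturally supplies an eventuality variable, whereas a frameless node such as lawyer in a copula sentence is rendered as a bare individual, leaving the copular predication with nothing event-like to attach to.

    % "He works": the PropBank frame work-01 supplies an eventuality variable.
    \exists e.\; \textit{work-01}(e) \wedge \textit{ARG0}(e, h)
    % "He is a lawyer": with no frame for "lawyer", a naive translation
    % yields only an individual, with no eventuality for the predication.
    \exists x.\; \textit{lawyer}(x) \wedge x = h
    % A uniform decomposition (one possible shape): every node contributes
    % both a predicative part e and an entity part x.
    \exists e\, \exists x.\; \textit{lawyer}(e, x) \wedge x = h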

2022

A Methodology for Building a Diachronic Dataset of Semantic Shifts and its Application to QC-FR-Diac-V1.0, a Free Reference for French
David Kletz | Philippe Langlais | François Lareau | Patrick Drouin
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Different algorithms have been proposed to detect semantic shifts (changes in a word's meaning over time) in a diachronic corpus. Yet, somewhat surprisingly, no reference corpus has been designed so far to evaluate them, leaving researchers to fall back on troublesome evaluation strategies. In this work, we introduce a methodology for building a reference dataset for the evaluation of semantic shift detection, that is, a list of words for which we know for sure whether their meaning changed over a period of interest. We leverage a state-of-the-art word-sense disambiguation model to associate a date of first appearance with each sense of a word. Significant changes in sense distributions, as well as clear stability, are detected, and the resulting words are inspected by experts using a dedicated interface before populating a reference dataset. As a proof of concept, we apply this methodology to a corpus of newspapers from Quebec covering the whole 20th century. We manually verified a subset of candidates, leading to QC-FR-Diac-V1.0, a corpus of 151 words that allows one to evaluate the identification of semantic shifts in French between 1910 and 1990.
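
A minimal sketch of the kind of check described above, given per-period sense counts for a word (obtained, in the paper, from a word-sense disambiguation model): flag the word as a shift candidate when its sense distribution differs significantly between two periods. The chi-squared test and the threshold are illustrative assumptions, not the authors' exact criteria.

    import numpy as np
    from scipy.stats import chi2_contingency

    def is_shift_candidate(counts_t1, counts_t2, alpha=0.01):
        """Flag a word whose distribution over senses differs significantly
        between two time periods (counts_t1, counts_t2: occurrence counts
        per sense, over the same sense inventory)."""
        table = np.array([counts_t1, counts_t2])
        # Drop senses unattested in both periods to keep the test well defined.
        table = table[:, table.sum(axis=0) > 0]
        _, p_value, _, _ = chi2_contingency(table)
        return p_value < alpha

    # Toy example: a sense that is rare early on but dominant later.
    print(is_shift_candidate([120, 3], [80, 95]))    # True  -> candidate shift
    print(is_shift_candidate([120, 40], [118, 42]))  # False -> stable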

Handling Idioms in Symbolic Multilingual Natural Language Generation
Michaelle Dubé | François Lareau
Proceedings of the 18th Workshop on Multiword Expressions @LREC2022

While idioms are usually very rigid in their expression, they sometimes allow a certain level of freedom in their usage, with modifiers or complements splitting them or being syntactically attached to internal nodes rather than to the root (e.g., “take something with a big grain of salt”). This means that they cannot always be handled as ready-made strings in rule-based natural language generation systems. Having access to the internal syntactic structure of an idiom allows for more subtle processing. We propose a way to enumerate all possible language-independent n-node trees and to map particular idioms of a language onto these generic syntactic patterns. Using this method, we integrate the idioms from the LN-fr into GenDR, a multilingual realizer. Our implementation covers nearly 98% of LN-fr’s idioms with high precision, and can easily be extended or ported to other languages.
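
A minimal sketch of the enumeration step, assuming the generic patterns are dependency trees over a fixed set of labelled node slots (mapping actual idiom lexemes onto these slots, and the integration into GenDR, are not shown):

    from itertools import product

    def rooted_trees(n):
        """Enumerate all dependency trees over n node slots 0..n-1, with
        slot 0 as the root.  Brute force: every non-root slot picks a
        governor and only acyclic assignments are kept -- cheap enough
        for the small n found in idioms (typically 2 to 5 nodes)."""
        for parents in product(range(n), repeat=n - 1):
            tree = {i + 1: p for i, p in enumerate(parents)}
            if all(_reaches_root(i, tree) for i in tree):
                yield tree  # {dependent slot: governor slot}

    def _reaches_root(i, tree):
        seen = set()
        while i != 0:
            if i in seen:
                return False
            seen.add(i)
            i = tree[i]
        return True

    # The 3 possible trees over 3 slots rooted at slot 0.
    print(list(rooted_trees(3)))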

2019

Multilingual sentence-level bias detection in Wikipedia
Desislava Aleksandrova | François Lareau | Pierre André Ménard
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)

We propose a multilingual method for the extraction of biased sentences from Wikipedia, and use it to create corpora in Bulgarian, French and English. Sifting through the revision history of the articles that at some point had been considered biased and later corrected, we retrieve the last tagged and the first untagged revisions as the before/after snapshots of what was deemed a violation of Wikipedia’s neutral point of view policy. We extract the sentences that were removed or rewritten in that edit. The approach yields sufficient data even in the case of relatively small Wikipedias, such as the Bulgarian one, where 62k articles produced 5k biased sentences. We evaluate our method by manually annotating 520 sentences for Bulgarian and French, and 744 for English. We assess the level of noise and analyze its sources. Finally, we exploit the data with well-known classification methods to detect biased sentences. Code and datasets are hosted at https://github.com/crim-ca/wiki-bias.
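
A minimal sketch of the extraction step, assuming the two revisions have already been retrieved and split into sentences (the revision-history crawling and tag detection are not shown, and the function name is hypothetical):

    import difflib

    def removed_or_rewritten(before_sents, after_sents):
        """Return the sentences of the last tagged revision that were removed
        or rewritten in the first untagged revision, i.e. the candidate
        biased sentences."""
        matcher = difflib.SequenceMatcher(a=before_sents, b=after_sents, autojunk=False)
        biased = []
        for op, i1, i2, _, _ in matcher.get_opcodes():
            if op in ("delete", "replace"):
                biased.extend(before_sents[i1:i2])
        return biased

    # Toy example.
    before = ["The city is stunningly beautiful.", "It was founded in 1850."]
    after = ["The city lies on a river.", "It was founded in 1850."]
    print(removed_or_rewritten(before, after))  # ['The city is stunningly beautiful.']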

2018

GenDR: A Generic Deep Realizer with Complex Lexicalization
François Lareau | Florie Lambrey | Ieva Dubinskaite | Daniel Galarreta-Piquette | Maryam Nejat
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

Retrieving Information from the French Lexical Network in RDF/OWL Format
Alexsandro Fonseca | Fatiha Sadat | François Lareau
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

Encoding a syntactic dictionary into a super granular unification grammar
Sylvain Kahane | François Lareau
Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces (GramLex)

We show how to turn a large-scale syntactic dictionary into a dependency-based unification grammar where each piece of lexical information calls a separate rule, yielding a super granular grammar. Subcategorization, raising and control verbs, auxiliaries and copula, passivization, and tough-movement are discussed. We focus on the semantics-syntax interface and offer a new perspective on syntactic structure.

Lexfom: a lexical functions ontology model
Alexsandro Fonseca | Fatiha Sadat | François Lareau
Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex - V)

A lexical function represents a type of relation that exists between lexical units (words or expressions) in any language. For example, antonymy is a type of relation represented by the lexical function Anti: Anti(big) = small. These relations include both paradigmatic relations, i.e. vertical relations such as synonymy, antonymy and meronymy, and syntagmatic relations, i.e. horizontal relations such as objective qualification (legitimate demand), subjective qualification (fruitful analysis), positive evaluation (good review) and support verbs (pay a visit, subject to an interrogation). In this paper, we present the Lexical Functions Ontology Model (lexfom) to represent lexical functions and the relations among lexical units. Lexfom is divided into four modules: lexical function representation (lfrep), lexical function family (lffam), lexical function semantic perspective (lfsem) and lexical function relations (lfrel). Moreover, we show how it combines with the Lexicon Model for Ontologies (lemon) for the transformation of lexical networks into Semantic Web formats. So far, we have implemented 100 simple and 500 complex lexical functions, and encoded about 8,000 syntagmatic and 46,000 paradigmatic relations for French.
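
A hedged sketch of how one such relation could be serialized with rdflib; the namespace URIs and property names below are placeholders, not lexfom's actual vocabulary:

    from rdflib import Graph, Namespace

    # Placeholder namespaces; lexfom's real vocabulary lives in its OWL modules.
    LEXFOM = Namespace("http://example.org/lexfom#")
    LEX = Namespace("http://example.org/lexicon/en#")

    g = Graph()
    # Reify the paradigmatic relation Anti(big) = small as a relation node, so
    # that the lexical function itself can carry properties (family, semantic
    # perspective, etc.) in the spirit of the lffam and lfsem modules.
    rel = LEX["anti_big_small"]
    g.add((rel, LEXFOM.lexicalFunction, LEXFOM.Anti))
    g.add((rel, LEXFOM.keyword, LEX.big))
    g.add((rel, LEXFOM.value, LEX.small))
    print(g.serialize(format="turtle"))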

2015

La séparation des composantes lexicale et flexionnelle des vecteurs de mots
François Lareau | Gabriel Bernier-Colborne | Patrick Drouin
Actes de la 22e conférence sur le Traitement Automatique des Langues Naturelles. Articles courts

In distributional semantics, word meaning is modelled by vectors that represent the distribution of words in corpora. Since these models are often computed on corpora with little linguistic preprocessing, they do not properly account for the morphological compositionality of word forms. We propose a method for decomposing word vectors into lexical and inflectional vectors.
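
A minimal sketch of one way such a decomposition could look, assuming access to vectors for all the inflected forms of a lemma (an illustration, not necessarily the method proposed in the paper):

    import numpy as np

    def decompose(form_vectors, paradigm):
        """Split each word-form vector into a lexical component shared by the
        whole paradigm of a lemma (here, simply the centroid) and an
        inflectional residual specific to each form.

        form_vectors: dict mapping word forms to numpy vectors
        paradigm: forms of one lemma, e.g. ["mange", "mangeons", "mangerai"]
        """
        lexical = np.mean([form_vectors[f] for f in paradigm], axis=0)
        inflectional = {f: form_vectors[f] - lexical for f in paradigm}
        return lexical, inflectional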

Le traitement des collocations en génération de texte multilingue
Florie Lambrey | François Lareau
Actes de la 22e conférence sur le Traitement Automatique des Langues Naturelles. Articles courts

To design generic text generators that can easily be reused across languages and applications, one must model the main linguistic phenomena found in languages in general. One of the fundamental phenomena that remain problematic for NLP is collocations, such as grippe carabinée ('raging flu'), peur bleue ('intense fear') or désir ardent ('burning desire'), where a given meaning (here, intensity) is not expressed in the same way depending on the lexical unit it modifies. In Explanatory Combinatorial Lexicography, collocations are modelled by means of lexical functions, which correspond to recurrent collocation patterns. For instance, the expressions mentioned above are described with the function Magn: Magn(PEUR) = BLEUE, Magn(GRIPPE) = CARABINÉE, etc. There are hundreds of lexical functions. In this paper, we focus on the implementation of a subset of functions that describe support verbs and certain types of modifiers.
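
A toy sketch of how a lexical function such as Magn can be consulted at generation time; the table below merely stands in for the lexical resources actually used by the realizer:

    # Toy Magn table: maps a keyword lexeme to its intensifying collocate.
    MAGN = {
        "PEUR": "BLEUE",        # peur bleue
        "GRIPPE": "CARABINÉE",  # grippe carabinée
        "DÉSIR": "ARDENT",      # désir ardent
    }

    def realize_intensity(keyword):
        """Return the collocate expressing intensity for a given keyword,
        falling back to a generic intensifier when no collocation is
        recorded (a simplification for this sketch)."""
        return MAGN.get(keyword, "TRÈS")

    print(realize_intensity("PEUR"))  # BLEUE
    print(realize_intensity("JOIE"))  # TRÈS (generic fallback)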

2012

Is Bad Structure Better Than No Structure?: Unsupervised Parsing for Realisation Ranking
Yasaman Motazedi | Mark Dras | François Lareau
Proceedings of COLING 2012

2011

Collocations in Multilingual Natural Language Generation: Lexical Functions meet Lexical Functional Grammar
François Lareau | Mark Dras | Benjamin Börschinger | Robert Dale
Proceedings of the Australasian Language Technology Association Workshop 2011

Detecting Interesting Event Sequences for Sports Reporting
François Lareau | Mark Dras | Robert Dale
Proceedings of the 13th European Workshop on Natural Language Generation

2007

Vers une formalisation des décompositions sémantiques dans la Grammaire d’Unification Sens-Texte
François Lareau
Actes de la 14ème conférence sur le Traitement Automatique des Langues Naturelles. Posters

We propose a formalization of meaning decomposition within the framework of Meaning-Text Unification Grammar. This formalization aims at a better integration of semantic decompositions into a global model of language. It relies on a polarity-saturation mechanism that controls the construction of decomposed representations as well as their mapping onto the syntactic trees that express them. The proposed formalism is illustrated here from a generation perspective, but it applies equally to parsing.

2005

Grammaire d’Unification Sens-Texte : modularité et polarisation
Sylvain Kahane | François Lareau
Actes de la 12ème conférence sur le Traitement Automatique des Langues Naturelles. Articles longs

The goal of this paper is to present the current state of the Meaning-Text Unification Grammar model, in particular since its formal foundations were clarified through the development of Polarized Unification Grammars. The emphasis is on the architecture of the model and on the role of polarization in articulating its various modules: the semantics-syntax interface, the syntax-morphotopology interface, and the grammars describing the different levels of representation. We study how parsing and generation procedures can be controlled by different strategies for neutralizing the various polarities.