Aleksandre Maskharashvili


2022

Generating Discourse Connectives with Pre-trained Language Models: Conditioning on Discourse Relations Helps Reconstruct the PDTB
Symon Stevens-Guille | Aleksandre Maskharashvili | Xintong Li | Michael White
Proceedings of the 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue

We report results of experiments using BART (Lewis et al., 2019) and the Penn Discourse Treebank (PDTB; Webber et al., 2019) to generate texts with correctly realized discourse relations. We address a question left open by previous research (Yung et al., 2021; Ko and Li, 2020) concerning whether conditioning the model on the intended discourse relation (that is, adding explicit discourse relation information to the model's input) improves its performance. Our results suggest that including discourse relation information in the input of the model significantly improves the consistency with which it produces a correctly realized discourse relation in the output. We compare our models’ performance to known results concerning the discourse structures found in written text and their possible explanations in terms of discourse interpretation strategies hypothesized in the psycholinguistics literature. Our findings suggest that natural language generation models based on current pre-trained Transformers will benefit from infusion with discourse-level information if they aim to construct discourses with the intended relations.
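To make concrete what conditioning on the intended discourse relation amounts to on the input side, the sketch below prepends a PDTB-style relation tag to the two arguments before passing them to BART through Hugging Face Transformers. The tag string, the example arguments, and the facebook/bart-base checkpoint are illustrative assumptions rather than the paper's exact setup, and a model fine-tuned on such inputs would be needed to produce meaningful connectives.

```python
# Sketch only: illustrates the input format for relation-conditioned
# generation; a fine-tuned checkpoint is needed for sensible output.
from transformers import BartTokenizerFast, BartForConditionalGeneration

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

arg1 = "The company reported record profits."
arg2 = "Its stock price fell sharply."
relation = "<Comparison.Contrast>"  # hypothetical relation tag

# With relation conditioning, the intended relation is part of the input;
# the unconditioned variant would drop the tag and keep only the arguments.
source = f"{relation} {arg1} {arg2}"

inputs = tokenizer(source, return_tensors="pt")
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```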

2021

Neural Methodius Revisited: Do Discourse Relations Help with Pre-Trained Models Too?
Aleksandre Maskharashvili | Symon Stevens-Guille | Xintong Li | Michael White
Proceedings of the 14th International Conference on Natural Language Generation

Recent developments in natural language generation (NLG) have bolstered arguments in favor of re-introducing explicit coding of discourse relations in the input to neural models. In the Methodius corpus, a meaning representation (MR) is hierarchically structured and includes discourse relations. Meanwhile, pre-trained language models have been shown to implicitly encode rich linguistic knowledge, which makes them an excellent resource for NLG. Synthesizing these lines of research, we conduct extensive experiments on the benefits of using pre-trained models and discourse relation information in MRs, focusing on improvements in discourse coherence and correctness. We redesign the Methodius corpus, and we also construct another Methodius corpus in which MRs are not hierarchically structured but flat. We report experiments on different versions of the corpora, which probe when, where, and how pre-trained models benefit from MRs with discourse relation information in them. We conclude that discourse relations significantly improve NLG when data is limited.
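To make the hierarchical-versus-flat contrast concrete, the toy example below shows the two MR styles side by side as linearized strings; the bracket syntax, attribute names, and values are invented for illustration and do not reproduce the corpus format.

```python
# Toy linearizations (not the corpus syntax): a hierarchical MR in which a
# discourse relation dominates two facts, versus a flat MR that only lists
# the facts and leaves the relation implicit.
hierarchical_mr = (
    "[CONTRAST "
    "[FACT entity=amphora-1 attribute=material value=clay] "
    "[FACT entity=amphora-2 attribute=material value=bronze]]"
)

flat_mr = (
    "[FACT entity=amphora-1 attribute=material value=clay] "
    "[FACT entity=amphora-2 attribute=material value=bronze]"
)

# Either string can serve as the source side of a sequence-to-sequence
# model; the experiments probe how much the explicit CONTRAST label helps.
for name, mr in [("hierarchical", hierarchical_mr), ("flat", flat_mr)]:
    print(f"{name:>12}: {mr}")
```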

Self-Training for Compositional Neural NLG in Task-Oriented Dialogue
Xintong Li | Symon Stevens-Guille | Aleksandre Maskharashvili | Michael White
Proceedings of the 14th International Conference on Natural Language Generation

Neural approaches to natural language generation in task-oriented dialogue have typically required large amounts of annotated training data to achieve satisfactory performance, especially when generating from compositional inputs. To address this issue, we show that self-training enhanced with constrained decoding yields large gains in data efficiency on a conversational weather dataset that employs compositional meaning representations. In particular, our experiments indicate that self-training with constrained decoding can enable sequence-to-sequence models to achieve satisfactory quality using vanilla decoding with five to ten times less data than with an ordinary supervised baseline; moreover, by leveraging pretrained models, data efficiency can be increased further, to fifty times less data. We confirm the main automatic results with human evaluations and show that they extend to an enhanced, compositional version of the E2E dataset. The end result is an approach that makes it possible to achieve acceptable performance on compositional NLG tasks using hundreds rather than tens of thousands of training samples.
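The core loop can be summarized schematically: a model trained on the small labelled set pseudo-labels unlabelled MRs, constrained decoding filters out outputs that fail to cover their MR, and the surviving pairs are added back into the training data. The callbacks below (train_fn, generate_fn, covers_fn) are hypothetical placeholders standing in for training, constrained decoding, and coverage checking; this is a sketch of the general recipe, not the paper's code.

```python
# Schematic self-training with constrained decoding acting as a filter.
def self_train(train_fn, generate_fn, covers_fn, labelled, unlabelled_mrs,
               rounds=3):
    model = train_fn(labelled)                 # supervised warm start
    for _ in range(rounds):
        pseudo = []
        for mr in unlabelled_mrs:
            text = generate_fn(model, mr)      # constrained decoding
            if text is not None and covers_fn(text, mr):
                pseudo.append((mr, text))      # keep only covered outputs
        model = train_fn(labelled + pseudo)    # retrain on gold + pseudo data
    return model
```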

2020

Leveraging Large Pretrained Models for WebNLG 2020
Xintong Li | Aleksandre Maskharashvili | Symon Jory Stevens-Guille | Michael White
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)

In this paper, we report experiments on fine-tuning large pretrained models to realize Resource Description Framework (RDF) triples as natural language. We provide the details of how we built one of the top-ranked English generation models in the WebNLG Challenge 2020. We also show that there appears to be considerable potential for reranking to improve on the current state of the art in terms of both statistical metrics and model-based metrics. Our human analyses of the generated texts show that for Russian, pretrained models achieved some success in lexical and morpho-syntactic choices as well as in content aggregation. Nevertheless, in a number of cases the models were unpredictable, in terms of both failure and success. Omissions of content and hallucinations, which in many cases occurred together, were major problems. By contrast, the models for English showed near-perfect performance on the validation set.
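As a concrete example of the data preparation such a system typically relies on, the snippet below linearizes RDF triples into a single source string for a pretrained sequence-to-sequence model. The separator tokens and the sample triples are assumptions for illustration, not necessarily those used in the challenge submission.

```python
# Linearize RDF triples into one source string for a seq2seq model.
triples = [
    ("Alan_Bean", "birthPlace", "Wheeler,_Texas"),
    ("Alan_Bean", "occupation", "Test_pilot"),
]

def linearize(triples):
    parts = []
    for subj, pred, obj in triples:
        parts.append(f"<S> {subj.replace('_', ' ')} "
                     f"<P> {pred} "
                     f"<O> {obj.replace('_', ' ')}")
    return " ".join(parts)

print(linearize(triples))
# -> "<S> Alan Bean <P> birthPlace <O> Wheeler, Texas <S> Alan Bean <P> occupation <O> Test pilot"
```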

Neural NLG for Methodius: From RST Meaning Representations to Texts
Symon Stevens-Guille | Aleksandre Maskharashvili | Amy Isard | Xintong Li | Michael White
Proceedings of the 13th International Conference on Natural Language Generation

While classic NLG systems typically made use of hierarchically structured content plans that included discourse relations as central components, more recent neural approaches have mostly mapped simple, flat inputs to texts without representing discourse relations explicitly. In this paper, we investigate whether it is beneficial to include discourse relations in the input to neural data-to-text generators for texts where discourse relations play an important role. To do so, we reimplement the sentence planning and realization components of a classic NLG system, Methodius, using LSTM sequence-to-sequence (seq2seq) models. We find that although seq2seq models can learn to generate fluent and grammatical texts remarkably well with sufficiently representative Methodius training data, they cannot learn to correctly express Methodius’s similarity and contrast comparisons unless the corresponding RST relations are included in the inputs. Additionally, we experiment with using self-training and reverse model reranking to better handle train/test data mismatches, and find that while these methods help reduce content errors, it remains essential to include discourse relations in the input to obtain optimal performance.
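The reverse-model reranking mentioned in the abstract can be written down in a few lines: candidate texts from the forward (MR-to-text) model are rescored by how well a reverse (text-to-MR) model reconstructs the original MR, which penalizes candidates with content errors. The scoring callback below is a hypothetical placeholder for the reverse model's log-probability, not the paper's implementation.

```python
# Pick the candidate whose text lets the reverse model best reconstruct
# the input MR; reverse_score(text, mr) is an assumed scoring callback
# returning the reverse model's log-probability of mr given text.
def rerank(mr, candidates, reverse_score):
    return max(candidates, key=lambda text: reverse_score(text, mr))
```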

2019

Bayesian Inference Semantics: A Modelling System and A Test Suite
Jean-Philippe Bernardy | Rasmus Blanck | Stergios Chatzikyriakidis | Shalom Lappin | Aleksandre Maskharashvili
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)

We present BIS, a Bayesian Inference Semantics, for probabilistic reasoning in natural language. The current system is based on the framework of Bernardy et al. (2018), but departs from it in important respects. BIS makes use of Bayesian learning for inferring a hypothesis from premises. This involves estimating the probability of the hypothesis, given the data supplied by the premises of an argument. It uses a syntactic parser to generate typed syntactic structures that serve as input to a model generation system. Sentences are interpreted compositionally to probabilistic programs, and the corresponding truth values are estimated using sampling methods. BIS successfully deals with various probabilistic semantic phenomena, including frequency adverbs, generalised quantifiers, generics, and vague predicates. It performs well on a number of interesting probabilistic reasoning tasks. It also sustains most classically valid inferences (instantiation, de Morgan’s laws, etc.). To test BIS we have built an experimental test suite with examples of a range of probabilistic and classical inference patterns.
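The estimation step at the heart of this kind of system can be illustrated with a toy rejection sampler, in which premises act as constraints on sampled worlds and the probability of the hypothesis is estimated over the worlds that satisfy them. This illustrates the general idea only, not the BIS implementation; the prior and constraints below are invented.

```python
# Toy rejection sampling: estimate P(hypothesis | premises) under a prior.
import random

def estimate(hypothesis, premises, sample_world, n=100_000):
    kept = accepted = 0
    for _ in range(n):
        world = sample_world()
        if all(p(world) for p in premises):   # keep worlds satisfying premises
            kept += 1
            accepted += hypothesis(world)     # count those verifying hypothesis
    return accepted / kept if kept else float("nan")

sample_world = lambda: random.random()        # uniform prior over [0, 1]
premises = [lambda x: x > 0.3]                # premise as a constraint
hypothesis = lambda x: x > 0.6                # conclusion to evaluate
print(estimate(hypothesis, premises, sample_world))   # ~0.57
```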

Two experiments for embedding Wordnet hierarchy into vector spaces
Jean-Philippe Bernardy | Aleksandre Maskharashvili
Proceedings of the 10th Global Wordnet Conference

In this paper, we investigate mapping the WordNet hyponymy relation to feature vectors. Our aim is to model lexical knowledge in such a way that it can be used as input to generic machine-learning models, such as phrase entailment predictors. We propose two models. The first leverages an existing mapping of words to feature vectors (fastText) and attempts to classify such vectors as falling within or outside of each class. The second model is fully supervised, using solely WordNet as ground truth; it maps each concept to an interval or a disjunction of intervals. The first model approaches, but does not quite attain, state-of-the-art performance. The second model can achieve near-perfect accuracy.
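A minimal sketch of the second model's idea, assuming concepts are mapped to intervals (or unions of intervals) and hyponymy is tested by interval inclusion; the intervals below are made up for illustration and are not learned from WordNet.

```python
# Hyponymy as interval inclusion: child is a hyponym of parent if every
# interval of the child lies inside some interval of the parent.
def is_hyponym(child, parent):
    return all(
        any(p_lo <= c_lo and c_hi <= p_hi for p_lo, p_hi in parent)
        for c_lo, c_hi in child
    )

animal = [(0.0, 0.4)]
dog    = [(0.1, 0.2)]

print(is_hyponym(dog, animal))   # True
print(is_hyponym(animal, dog))   # False
```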

Predicates as Boxes in Bayesian Semantics for Natural Language
Jean-Philippe Bernardy | Rasmus Blanck | Stergios Chatzikyriakidis | Shalom Lappin | Aleksandre Maskharashvili
Proceedings of the 22nd Nordic Conference on Computational Linguistics

In this paper, we present a Bayesian approach to natural language semantics. Our main focus is on the inference task in an environment where judgments require probabilistic reasoning. We treat nouns, verbs, adjectives, etc. as unary predicates, and we model them as boxes in a bounded domain. We apply Bayesian learning to satisfy constraints expressed as premises. In this way we construct a model, by specifying boxes for the predicates. The probability of the hypothesis (the conclusion) is evaluated against the model that incorporates the premises as constraints.
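A minimal sketch of the "predicates as boxes" picture, assuming a unary predicate is an axis-aligned box in a bounded domain, an individual is a point, and predication is containment. The boxes and the point below are invented, and the Bayesian learning step that adjusts boxes so that the premises hold is not shown.

```python
# Predicates as boxes: predication is point-in-box containment.
def in_box(point, box):
    # box is a list of (low, high) pairs, one per dimension of the domain.
    return all(lo <= x <= hi for x, (lo, hi) in zip(point, box))

bird   = [(0.1, 0.6), (0.2, 0.9)]   # box for the predicate "bird"
flies  = [(0.0, 0.7), (0.5, 1.0)]   # box for the predicate "flies"
tweety = (0.3, 0.6)                 # an individual as a point in the domain

print(in_box(tweety, bird))    # True: "Tweety is a bird" holds in this model
print(in_box(tweety, flies))   # True: "Tweety flies" holds in this model
```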

2016

Interfacing Sentential and Discourse TAG-based Grammars
Laurence Danlos | Aleksandre Maskharashvili | Sylvain Pogodalla
Proceedings of the 12th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+12)

2015

Sentential and Discourse Grammars Based on TAG: A D-STAG Approach with ACG (Grammaires phrastiques et discursives fondées sur les TAG : une approche de D-STAG avec les ACG) [in French]
Laurence Danlos | Aleksandre Maskharashvili | Sylvain Pogodalla
Actes de la 22e conférence sur le Traitement Automatique des Langues Naturelles. Articles longs

We present a method for articulating sentence grammar and discourse grammar that avoids resorting to an intermediate processing step. The method is general enough to build discourse structures that are not trees but directed acyclic graphs (DAGs). Our analysis builds on an approach to discourse parsing, Discourse Synchronous TAG (D-STAG), which uses Tree-Adjoining Grammars (TAG). To do so, we use an encoding of TAG into Abstract Categorial Grammars (ACG). This encoding makes it possible, on the one hand, to use higher-order semantic interpretation in order to build structures that are DAGs rather than trees, and, on the other hand, to use the composition properties of ACG to naturally realize the interface between sentential and discourse grammar. All the examples proposed to illustrate the method have been implemented and can be tested with the appropriate software.

2014

An ACG Analysis of the G-TAG Generation Process
Laurence Danlos | Aleksandre Maskharashvili | Sylvain Pogodalla
Proceedings of the 8th International Natural Language Generation Conference (INLG)

Text Generation: Reexamining G-TAG with Abstract Categorial Grammars (Génération de textes : G-TAG revisité avec les Grammaires Catégorielles Abstraites) [in French]
Laurence Danlos | Aleksandre Maskharashvili | Sylvain Pogodalla
Proceedings of TALN 2014 (Volume 1: Long Papers)

2013

Constituency and Dependency Relationship from a Tree Adjoining Grammar and Abstract Categorial Grammars Perspective
Aleksandre Maskharashvili | Sylvain Pogodalla
Proceedings of the Sixth International Joint Conference on Natural Language Processing