Cyril Labbé

Also published as: Cyril Labbe


2023

NanoNER: Named Entity Recognition for Nanobiology Using Experts’ Knowledge and Distant Supervision
Ran Cheng | Martin Lentschat | Cyril Labbe
Proceedings of the Second Workshop on Information Extraction from Scientific Publications

Detection of Tortured Phrases in Scientific Literature
Eléna Martel | Martin Lentschat | Cyril Labbe
Proceedings of the Second Workshop on Information Extraction from Scientific Publications

2022

Investigating the detection of Tortured Phrases in Scientific Literature
Puthineath Lay | Martin Lentschat | Cyril Labbe
Proceedings of the Third Workshop on Scholarly Document Processing

With the help of online tools, unscrupulous authors can today generate pseudo-scientific articles and attempt to publish them. Some of these tools produce new content by replacing or paraphrasing existing texts, but they tend to generate nonsensical expressions. A recent study introduced the concept of the “tortured phrase”: an odd, unexpected phrase that appears in place of an established one, e.g. “counterfeit consciousness” instead of “artificial intelligence”. The present study investigates how tortured phrases that are not yet listed can be detected automatically. We conducted several experiments, including non-neural binary classification, neural binary classification, and cosine-similarity comparison of the phrase tokens, yielding noticeable results.
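
To make the last of these approaches concrete, here is a minimal sketch of a cosine-similarity comparison between a candidate phrase and the fixed expression it may have replaced. The embedding vectors below are toy numbers invented for illustration; the paper's experiments rely on real word embeddings.

```python
# Minimal sketch (toy vectors, not the paper's embeddings): flag a phrase as
# potentially "tortured" when its tokens are far, in cosine terms, from the
# tokens of the expected fixed expression.
import numpy as np

# Hypothetical 3-dimensional word embeddings, for illustration only.
emb = {
    "artificial":    np.array([0.9, 0.1, 0.0]),
    "intelligence":  np.array([0.8, 0.3, 0.1]),
    "counterfeit":   np.array([0.1, 0.9, 0.2]),
    "consciousness": np.array([0.2, 0.7, 0.4]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def phrase_vector(phrase):
    # Average the token embeddings of a phrase.
    return np.mean([emb[t] for t in phrase.split()], axis=0)

sim = cosine(phrase_vector("counterfeit consciousness"),
             phrase_vector("artificial intelligence"))
print(f"similarity = {sim:.2f}")  # a low value hints at a tortured phrase
```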

Citation Context Classification: Critical vs Non-critical
Sonita Te | Amira Barhoumi | Martin Lentschat | Frédérique Bordignon | Cyril Labbé | François Portet
Proceedings of the Third Workshop on Scholarly Document Processing

Recently, there has been a large body of research in Natural Language Processing on citation analysis in the scientific literature. Studies of citation behavior aim at finding how researchers cite a paper in their work. In this paper, we are interested in identifying cited papers that are criticized. Recent research introduced the concept of critical citations, which provides a useful theoretical framework that treats criticism as an important part of scientific progress. Indeed, identifying criticism could be a way to spot errors and thus encourage the self-correction of science. In this work, we investigate how to automatically classify critical citation contexts using Natural Language Processing (NLP). Our classification task consists of predicting critical or non-critical labels for citation contexts. For this, we experiment with and compare different methods, including rule-based and machine learning approaches, to classify critical vs. non-critical citation contexts. Our experiments show that fine-tuning the pretrained transformer model RoBERTa achieved the highest performance among all systems.
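
A minimal sketch of how such a RoBERTa-based classifier can be set up with the Hugging Face transformers library. The example contexts and the label convention are invented for illustration, and the model would still need fine-tuning on labeled citation contexts before its predictions are meaningful.

```python
# Minimal sketch (not the authors' code): binary classification of citation
# contexts with a pretrained RoBERTa model.
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

contexts = [  # hypothetical citation contexts
    "Smith et al. (2019) report strong results on this benchmark.",
    "However, the evaluation in [12] is flawed: the test set overlaps with the training data.",
]
batch = tokenizer(contexts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
preds = logits.argmax(dim=-1)  # assumed convention: 0 = non-critical, 1 = critical
```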

Overview of the DAGPap22 Shared Task on Detecting Automatically Generated Scientific Papers
Yury Kashnitsky | Drahomira Herrmannova | Anita de Waard | George Tsatsaronis | Catriona Fennell | Cyril Labbe
Proceedings of the Third Workshop on Scholarly Document Processing

This paper provides an overview of the DAGPap22 shared task on the detection of automatically generated scientific papers, held at the Scholarly Document Processing workshop co-located with COLING. We frame the detection problem as a binary classification task: given an excerpt of text, label it as either human-written or machine-generated. We shared a dataset containing excerpts from human-written papers as well as artificially generated content and suspicious documents collected by Elsevier publishing and editorial teams. As a test set, participants were provided with a 5x larger corpus of openly accessible human-written and generated papers from the same scientific domains. The shared task saw 180 submissions across 14 participating teams and resulted in two published technical reports. We discuss our findings from the shared task in this overview paper.

2020

Controllable Neural Natural Language Generation: comparison of state-of-the-art control strategies
Yuanmin Leng | François Portet | Cyril Labbé | Raheel Qader
Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)

Most NLG systems target text fluency and grammatical correctness, disregarding control over text structure and length. However, control over the output plays an important part in industrial NLG applications. In this paper, we study different control strategies for triple-to-text generation systems, particularly with respect to text structure and text length. Regarding text structure, we present an approach that relies on aligning the input entities with the facts on the target side, ensuring that the order and distribution of entities are the same in both the input and the text. For control over text length, we show two different approaches: one supplies a length constraint as input, while the other forces the end-of-sentence tag to be included at each step of top-k decoding. Finally, we propose four metrics to assess the degree to which these methods affect an NLG system's ability to control text structure and length. Our analyses demonstrate that all the methods enhance the system's control at the cost of a slight decrease in text fluency. In addition, constraining length at the input level performs much better than control at the decoding level.
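
The second length-control strategy can be sketched as a decoding-time constraint: at every step, the end-of-sentence token is kept among the top-k candidates so that generation may terminate early. This is a reconstruction for illustration under that assumption, not the paper's implementation.

```python
# Sketch of top-k sampling with the end-of-sentence token always kept among
# the candidates (an assumed reading of the method, for illustration).
import torch

def top_k_with_forced_eos(logits: torch.Tensor, k: int, eos_id: int) -> int:
    """Sample the next token from the top-k logits, with EOS always included."""
    topk_vals, topk_idx = torch.topk(logits, k)
    if eos_id not in topk_idx:
        # Replace the weakest candidate with the EOS token.
        topk_idx[-1] = eos_id
        topk_vals[-1] = logits[eos_id]
    probs = torch.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1).item()
    return int(topk_idx[choice])

# Toy usage: a random 100-token "vocabulary" with eos_id = 0.
next_token = top_k_with_forced_eos(torch.randn(100), k=10, eos_id=0)
```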

Seq2SeqPy: A Lightweight and Customizable Toolkit for Neural Sequence-to-Sequence Modeling
Raheel Qader | François Portet | Cyril Labbe
Proceedings of the Twelfth Language Resources and Evaluation Conference

We present Seq2SeqPy, a lightweight toolkit for sequence-to-sequence modeling that prioritizes simplicity and the ability to customize standard architectures easily. The toolkit supports several well-known architectures such as Recurrent Neural Networks, Pointer Generator Networks, and the Transformer model. We evaluate the toolkit on two datasets and show that it performs similarly to, or even better than, a very widely used sequence-to-sequence toolkit.

2019

Fine-Grained Control of Sentence Segmentation and Entity Positioning in Neural NLG
Kritika Mehta | Raheel Qader | Cyril Labbe | François Portet
Proceedings of the 1st Workshop on Discourse Structure in Neural NLG

The move from pipeline Natural Language Generation (NLG) approaches to neural end-to-end approaches led to a loss of control in sentence planning operations, owing to the conflation of intermediary micro-planning stages into a single model. Such control is highly necessary when the text must be tailored to respect constraints such as which entity to mention first, the entity position, the complexity of sentences, etc. In this paper, we introduce fine-grained control of sentence planning in neural data-to-text generation models at two levels: realization of input entities in desired sentences, and realization of the input entities in the desired position among individual sentences. We show that by augmenting the input with explicit position identifiers, the neural model can achieve strong control over the output structure while keeping the naturalness of the generated text intact. Since sentence-level metrics are not entirely suitable to evaluate this task, we use a task-specific metric that accounts for the model's ability to achieve control. The results demonstrate that the position identifiers do constrain the neural model to respect the intended output structure, which can be useful in a variety of domains that require the generated text to follow a certain structure.
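
A hypothetical illustration of such input augmentation follows; the tag format and the way the sentence plan is encoded here are assumptions, not the paper's exact scheme.

```python
# Hypothetical input augmentation: each input fact is prefixed with the
# sentence it should appear in and its position within that sentence.
triples = [
    ("Alimentum", "area", "city centre"),
    ("Alimentum", "food", "French"),
]
plan = [(1, 1), (1, 2)]  # (sentence index, position in sentence), assumed encoding

augmented = " ".join(
    f"<s{sent}> <p{pos}> {s} {p} {o}"
    for (s, p, o), (sent, pos) in zip(triples, plan)
)
print(augmented)
# <s1> <p1> Alimentum area city centre <s1> <p2> Alimentum food French
```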

Semi-Supervised Neural Text Generation by Joint Learning of Natural Language Generation and Natural Language Understanding Models
Raheel Qader | François Portet | Cyril Labbé
Proceedings of the 12th International Conference on Natural Language Generation

In Natural Language Generation (NLG), End-to-End (E2E) systems trained through deep learning have recently gained strong interest. Such deep models need a large amount of carefully annotated data to reach satisfactory performance. However, acquiring such datasets for every new NLG application is a tedious and time-consuming task. In this paper, we propose a semi-supervised deep learning scheme that can learn from non-annotated data as well as from annotated data when available. It uses an NLG and a Natural Language Understanding (NLU) sequence-to-sequence model that are learned jointly to compensate for the lack of annotation. Experiments on two benchmark datasets show that, with a limited amount of annotated data, the method can achieve very competitive results while not using any pre-processing or re-scoring tricks. These findings open the way to the exploitation of non-annotated datasets, which is currently the main bottleneck in extending E2E NLG systems to new applications.
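
The joint scheme can be sketched conceptually as follows. The model classes are stand-ins for sequence-to-sequence networks, and the detail that non-annotated text contributes through an NLU-then-NLG reconstruction loss is an assumption based on the abstract.

```python
# Conceptual sketch of joint NLG/NLU training (stand-in classes, assumed
# reconstruction objective for the non-annotated case).

class Seq2Seq:
    """Placeholder for a trainable sequence-to-sequence model."""
    def loss(self, src, tgt):
        return 0.0  # cross-entropy of tgt given src, in a real model
    def decode(self, src):
        return src  # greedy or beam decoding, in a real model

nlg, nlu = Seq2Seq(), Seq2Seq()

def training_loss(batch, annotated: bool) -> float:
    if annotated:
        mr, text = batch                 # (meaning representation, text) pair
        return nlg.loss(mr, text) + nlu.loss(text, mr)
    text = batch                         # non-annotated text
    mr_hat = nlu.decode(text)            # pseudo meaning representation
    return nlg.loss(mr_hat, text)        # reconstruct the original text

print(training_loss(("name[Alimentum]", "Alimentum is a restaurant."), annotated=True))
```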

2018

Generation of Company descriptions using concept-to-text and text-to-text deep models: dataset collection and systems evaluation
Raheel Qader | Khoder Jneid | François Portet | Cyril Labbé
Proceedings of the 11th International Conference on Natural Language Generation

In this paper we study the performance of several state-of-the-art sequence-to-sequence models applied to the generation of short company descriptions. The models are evaluated on a newly created and publicly available company dataset collected from Wikipedia. The dataset consists of around 51K company descriptions that can be used for both concept-to-text and text-to-text generation tasks. Automatic metrics and human evaluation scores computed on the generated company descriptions show promising results despite the difficulty of the task, as the dataset (like most available datasets) was not originally designed for machine learning. In addition, we perform a correlation analysis between automatic metrics and human evaluations and show that certain automatic metrics are more correlated with human judgments.
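
A toy example of such a correlation analysis; the scores below are invented, and the paper's metrics and human ratings differ.

```python
# Toy correlation between an automatic metric and human scores (made-up data).
from scipy.stats import pearsonr, spearmanr

metric_scores = [0.31, 0.52, 0.47, 0.68, 0.25]  # e.g., BLEU per description
human_scores = [2.0, 3.5, 3.0, 4.5, 1.5]        # e.g., fluency ratings

print(pearsonr(metric_scores, human_scores))
print(spearmanr(metric_scores, human_scores))
```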

2015

A Personal Storytelling about Your Favorite Data
Cyril Labbé | Claudia Roncancio | Damien Bras
Proceedings of the 15th European Workshop on Natural Language Generation (ENLG)