Maarten De Raedt


2023

IDAS: Intent Discovery with Abstractive Summarization
Maarten De Raedt | Fréderic Godin | Thomas Demeester | Chris Develder
Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)

Intent discovery is the task of inferring latent intents from a set of unlabeled utterances, and is a useful step towards the efficient creation of new conversational agents. We show that recent competitive methods in intent discovery can be outperformed by clustering utterances based on abstractive summaries, i.e., “labels”, that retain the core elements while removing non-essential information. We contribute the IDAS approach, which collects a set of descriptive utterance labels by prompting a Large Language Model, starting from a well-chosen seed set of prototypical utterances, to bootstrap an In-Context Learning procedure that generates labels for non-prototypical utterances. The utterances and their resulting noisy labels are then encoded by a frozen pre-trained encoder and subsequently clustered to recover the latent intents. On the unsupervised task (without any intent labels), IDAS outperforms the state of the art by up to +7.42% in standard cluster metrics on the Banking, StackOverflow, and Transport datasets. On the semi-supervised task (with labels for a subset of intents), IDAS surpasses two recent methods on the CLINC benchmark without even using labeled data.
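
To make the pipeline concrete, here is a minimal sketch of an IDAS-style procedure, not the authors' code: `llm_complete` is a hypothetical stand-in for any LLM API, and the encoder choice, prompt wording, and the way utterance and label embeddings are combined are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal IDAS-style sketch (illustrative, not the authors' implementation).
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call; this naive fallback just
    # echoes the first words of the utterance so the sketch runs end-to-end.
    utterance = prompt.rsplit("Utterance: ", 1)[-1].split("\n")[0]
    return " ".join(utterance.split()[:4])

def label_utterance(utterance, seed_examples):
    # Bootstrap in-context learning from (prototypical utterance, label) pairs.
    demos = "\n".join(f"Utterance: {u}\nLabel: {l}" for u, l in seed_examples)
    prompt = f"{demos}\nUtterance: {utterance}\nLabel:"
    return llm_complete(prompt).strip()

def cluster_intents(utterances, seed_examples, n_intents):
    labels = [label_utterance(u, seed_examples) for u in utterances]
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # frozen, pre-trained
    # Concatenate utterance and label embeddings; one simple way to combine.
    z = np.hstack([encoder.encode(utterances), encoder.encode(labels)])
    return KMeans(n_clusters=n_intents, n_init=10).fit_predict(z)

seeds = [("how do I reset my card PIN", "reset card PIN")]
print(cluster_intents(["I lost my card", "card got stolen"], seeds, n_intents=1))
```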

Zero-Shot Cross-Lingual Sentiment Classification under Distribution Shift: an Exploratory Study
Maarten De Raedt | Semere Kiros Bitew | Fréderic Godin | Thomas Demeester | Chris Develder
Proceedings of the 3rd Workshop on Multi-lingual Representation Learning (MRL)

2022

Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals
Maarten De Raedt | Fréderic Godin | Chris Develder | Thomas Demeester
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

For text classification tasks, finetuned language models perform remarkably well. Yet, they tend to rely on spurious patterns in the training data, limiting their performance on out-of-distribution (OOD) test data. Among recent approaches to avoiding such spurious patterns, adding extra counterfactual samples to the training data has proven very effective. However, counterfactual data generation is costly, since it relies on human annotation. We therefore propose a novel solution that only requires annotating a small fraction (e.g., 1%) of the original training data and automatically generates extra counterfactuals in an encoding vector space. We demonstrate the effectiveness of our approach on sentiment classification, using IMDb data for training and other sets for OOD testing (i.e., Amazon, SemEval, and Yelp). We achieve noticeable accuracy improvements by adding only 1% manual counterfactuals: +3% over adding 100% extra in-distribution training samples, and +1.3% over alternative counterfactual approaches.
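
As a rough illustration of generating counterfactuals in an encoding space, the sketch below averages the embedding-space offset between a few annotated (original, counterfactual) pairs and shifts unannotated training embeddings along it while flipping the label. The encoder, the toy data, and this exact generation rule are simplifying assumptions, not the paper's method.

```python
# Simplified sketch of counterfactual augmentation in embedding space.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # frozen sentence encoder

# The small annotated fraction: (original, human-written counterfactual)
# pairs with opposite sentiment, e.g. ~1% of the training set.
pairs = [
    ("The movie was wonderful.", "The movie was dreadful."),
    ("Acting was superb throughout.", "Acting was awful throughout."),
]
z_orig = encoder.encode([o for o, _ in pairs])
z_cf = encoder.encode([c for _, c in pairs])

# Average edit direction from positive originals to negative counterfactuals.
direction = (z_cf - z_orig).mean(axis=0)

# Synthesize counterfactual embeddings for the unannotated data by shifting
# along that direction (sign depends on the source label) and flipping labels.
train_texts = ["A delightful film.", "A boring, pointless film."]
train_labels = np.array([1, 0])  # 1 = positive, 0 = negative
z_train = encoder.encode(train_texts)
sign = np.where(train_labels[:, None] == 1, 1.0, -1.0)
z_synth = z_train + sign * direction
y_synth = 1 - train_labels

# Train on real plus synthetic embeddings.
X = np.vstack([z_train, z_synth])
y = np.concatenate([train_labels, y_synth])
clf = LogisticRegression(max_iter=1000).fit(X, y)
```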

2021

A Simple Geometric Method for Cross-Lingual Linguistic Transformations with Pre-trained Autoencoders
Maarten De Raedt | Fréderic Godin | Pieter Buteneers | Chris Develder | Thomas Demeester
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Powerful sentence encoders trained for multiple languages are on the rise. These systems are capable of embedding a wide range of linguistic properties into vector representations. While explicit probing tasks can be used to verify the presence of specific linguistic properties, it is unclear whether the vector representations can be manipulated to indirectly steer such properties. For efficient learning, we investigate the use of a geometric mapping in embedding space to transform linguistic properties, without any tuning of the pre-trained sentence encoder or decoder. We validate our approach on three linguistic properties using a pre-trained multilingual autoencoder and analyze the results in both monolingual and cross-lingual settings.
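
A toy version of the geometric idea, under stated assumptions: a public multilingual sentence encoder stands in for the paper's pre-trained autoencoder, and since no decoder is assumed here, nearest-neighbour retrieval over candidate sentences substitutes for decoding the shifted vector.

```python
# Toy illustration: shift an embedding by the offset between the mean
# representations of two property classes (here, present vs. past tense).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

present = ["She walks to work.", "He writes a letter."]  # property A
past = ["She walked to work.", "He wrote a letter."]     # property B

# Offset vector capturing the A -> B transformation in embedding space.
offset = encoder.encode(past).mean(axis=0) - encoder.encode(present).mean(axis=0)

query = "They play in the garden."
z = encoder.encode([query])[0] + offset  # apply the geometric mapping

# Stand-in for decoding: retrieve the closest candidate by cosine similarity.
candidates = ["They play in the garden.", "They played in the garden."]
z_cand = encoder.encode(candidates)
sims = z_cand @ z / (np.linalg.norm(z_cand, axis=1) * np.linalg.norm(z))
print(candidates[int(np.argmax(sims))])  # expected: the past-tense variant
```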