Stéphane Clinchant

Also published as: Stephane Clinchant


2024

Retrieval-augmented generation in multilingual settings
Nadezhda Chirkova | David Rau | Hervé Déjean | Thibault Formal | Stéphane Clinchant | Vassilina Nikoulina
Proceedings of the 1st Workshop on Towards Knowledgeable Language Models (KnowLLM 2024)

Retrieval-augmented generation (RAG) has recently emerged as a promising solution for incorporating up-to-date or domain-specific knowledge into large language models (LLMs) and for improving their factuality, but it has been studied predominantly in English-only settings. In this work, we consider RAG in the multilingual setting (mRAG), i.e. with user queries and the datastore in 13 languages, and investigate which components, and with which adjustments, are needed to build a well-performing mRAG pipeline that can serve as a strong baseline for future work. Our findings highlight that despite the availability of high-quality off-the-shelf multilingual retrievers and generators, task-specific prompt engineering is needed to enable generation in the user's language. Moreover, current evaluation metrics need adjustments for the multilingual setting to account for variations in the spelling of named entities. The main limitations to be addressed in future work include frequent code-switching in non-Latin-alphabet languages, occasional fluency errors, misreading of the provided documents, and irrelevant retrieval. We release the code for the resulting mRAG baseline pipeline at https://github.com/naver/bergen, with documentation at https://github.com/naver/bergen/blob/main/documentations/multilingual.md.

BERGEN: A Benchmarking Library for Retrieval-Augmented Generation
David Rau | Hervé Déjean | Nadezhda Chirkova | Thibault Formal | Shuai Wang | Stéphane Clinchant | Vassilina Nikoulina
Findings of the Association for Computational Linguistics: EMNLP 2024

Retrieval-Augmented Generation makes it possible to enhance Large Language Models with external knowledge. In response to the recent popularity of generative LLMs, many RAG approaches have been proposed, involving an intricate array of configurations such as evaluation datasets, collections, metrics, retrievers, and LLMs. Inconsistent benchmarking poses a major challenge to comparing approaches and understanding the impact of each component in the pipeline. In this work, we study best practices that lay the groundwork for a systematic evaluation of RAG and present BERGEN, an end-to-end library for reproducible research that standardizes RAG experiments. In an extensive study focusing on QA, we benchmark different state-of-the-art retrievers, rerankers, and LLMs. Additionally, we analyze existing RAG metrics and datasets.

Retrieval Evaluation for Long-Form and Knowledge-Intensive Image–Text Article Composition
Jheng-Hong Yang | Carlos Lassance | Rafael S. Rezende | Krishna Srinivasan | Stéphane Clinchant | Jimmy Lin
Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia

This paper examines the integration of images into Wikipedia articles by evaluating image–text retrieval tasks in multimedia content creation, focusing on developing retrieval-augmented tools to enhance the creation of high-quality multimedia articles. Despite ongoing research, the interplay between text and visuals, such as photos and diagrams, remains underexplored, limiting support for real-world applications. We introduce AToMiC, a dataset for long-form, knowledge-intensive image–text retrieval, detailing its task design, evaluation protocols, and relevance criteria. Our findings show that a hybrid approach combining a sparse retriever with a dense retriever achieves satisfactory effectiveness, with nDCG@10 scores around 0.4 for the Image Suggestion and Image Promotion tasks, providing insights into the challenges of retrieval evaluation in an image–text interleaved article composition context. The AToMiC dataset is available at https://github.com/TREC-AToMiC/AToMiC.

2021

Efficient Inference for Multilingual Neural Machine Translation
Alexandre Berard | Dain Lee | Stephane Clinchant | Kweonwoo Jung | Vassilina Nikoulina
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Multilingual NMT has become an attractive solution for MT deployment in production, but matching bilingual quality comes at the cost of larger and slower models. In this work, we consider several ways to make multilingual NMT faster at inference without degrading its quality. We experiment with several “light decoder” architectures in two 20-language multi-parallel settings: small-scale on TED Talks and large-scale on ParaCrawl. Our experiments demonstrate that combining a shallow decoder with vocabulary filtering leads to almost 2 times faster inference with no loss in translation quality. We validate our findings with BLEU and chrF (on 380 language pairs), robustness evaluation, and human evaluation.

2019

On the use of BERT for Neural Machine Translation
Stephane Clinchant | Kweon Woo Jung | Vassilina Nikoulina
Proceedings of the 3rd Workshop on Neural Generation and Translation

Exploiting large pretrained models for various NMT tasks has gained a lot of visibility recently. In this work we study how pretrained BERT models can be exploited for supervised Neural Machine Translation. We compare various ways to integrate a pretrained BERT model with an NMT model and study the impact of the monolingual data used for BERT training on the final translation quality. We use the WMT-14 English-German, IWSLT15 English-German, and IWSLT14 English-Russian datasets for these experiments. In addition to evaluation on the standard task test sets, we perform evaluation on out-of-domain test sets and noise-injected test sets in order to assess how BERT pretrained representations affect model robustness.

2016

A Domain Adaptation Regularization for Denoising Autoencoders
Stéphane Clinchant | Gabriela Csurka | Boris Chidlovskii
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Transductive Adaptation of Black Box Predictions
Stéphane Clinchant | Boris Chidlovskii | Gabriela Csurka
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2013

Aggregating Continuous Word Embeddings for Information Retrieval
Stéphane Clinchant | Florent Perronnin
Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality