Beyond Contrastive Learning: Synthetic Data Enables List-wise Training with Multiple Levels of Relevance

Reza Esfandiarpoor, George Zerveas, Ruochen Zhang, Macton Mgonzo, Carsten Eickhoff, Stephen Bach


Abstract
Although synthetic data has changed various aspects of information retrieval (IR) pipelines, the main training paradigm remains the same: contrastive learning with binary relevance labels, where one positive document is compared against several negatives using the InfoNCE loss. This objective treats all documents that are not explicitly annotated as relevant on an equally negative footing, regardless of their actual degree of relevance, thus missing subtle nuances useful for ranking. To overcome this limitation, in this work, we forgo real documents and annotations and use large language models to directly generate synthetic documents that answer the MS MARCO queries according to _several different levels of relevance_. We also propose using Wasserstein distance as a more effective loss function for training transformer-based retrievers with graduated relevance labels. Our experiments on the MS MARCO and BEIR benchmarks show that our proposed approach outperforms conventional training with InfoNCE by a large margin. Without using any real documents, our method significantly improves self-supervised retrievers and is more robust to distribution shift compared to contrastive learning using real data. Our method also successfully integrates existing real data into the synthetic ranking context, further boosting performance. Overall, we show that generating multi-level ranking contexts is a better approach to synthetic data generation for IR than just generating the standard positive and negative documents.
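To make the list-wise training idea concrete, below is a minimal, hypothetical sketch of a Wasserstein-style loss over graded relevance labels. It is not the paper's exact formulation: the function name, the softmax normalization of scores, the label-mass target distribution, and the CDF-based 1D Wasserstein computation over list positions are all illustrative assumptions made here for clarity.

```python
# Illustrative sketch only: a list-wise Wasserstein loss over graded relevance
# labels (e.g., 0 = irrelevant ... 3 = perfectly relevant). The exact loss used
# in the paper may differ.
import torch
import torch.nn.functional as F


def listwise_wasserstein_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Compare the model's score distribution against a target distribution
    derived from graded relevance labels.

    scores: (batch, list_size) raw query-document similarity scores
    labels: (batch, list_size) graded relevance labels (non-negative integers)
    """
    # Normalize both the predictions and the labels into probability
    # distributions over the candidate list.
    pred = F.softmax(scores, dim=-1)
    target = labels.float() / labels.float().sum(dim=-1, keepdim=True).clamp(min=1e-8)

    # 1D Wasserstein-1 distance between discrete distributions sharing the same
    # support (the list positions): the L1 distance between their CDFs.
    cdf_pred = torch.cumsum(pred, dim=-1)
    cdf_target = torch.cumsum(target, dim=-1)
    return (cdf_pred - cdf_target).abs().sum(dim=-1).mean()


# Toy usage: one query with four synthetic documents generated at relevance
# levels 3, 2, 1, and 0.
scores = torch.tensor([[4.1, 2.7, 1.3, -0.5]], requires_grad=True)
labels = torch.tensor([[3, 2, 1, 0]])
loss = listwise_wasserstein_loss(scores, labels)
loss.backward()
```

Unlike InfoNCE, which collapses all non-positive documents into a single negative class, a loss of this form can penalize a mid-relevance document less than an irrelevant one, which is the property the multi-level synthetic ranking contexts are designed to exploit.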
Anthology ID:
2025.findings-emnlp.1245
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2025
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
22860–22882
URL:
https://aclanthology.org/2025.findings-emnlp.1245/
Cite (ACL):
Reza Esfandiarpoor, George Zerveas, Ruochen Zhang, Macton Mgonzo, Carsten Eickhoff, and Stephen Bach. 2025. Beyond Contrastive Learning: Synthetic Data Enables List-wise Training with Multiple Levels of Relevance. In Findings of the Association for Computational Linguistics: EMNLP 2025, pages 22860–22882, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Beyond Contrastive Learning: Synthetic Data Enables List-wise Training with Multiple Levels of Relevance (Esfandiarpoor et al., Findings 2025)
PDF:
https://aclanthology.org/2025.findings-emnlp.1245.pdf
Checklist:
 2025.findings-emnlp.1245.checklist.pdf