Fengyu Cai


2024

MixGR: Enhancing Retriever Generalization for Scientific Domain through Complementary Granularity
Fengyu Cai | Xinran Zhao | Tong Chen | Sihao Chen | Hongming Zhang | Iryna Gurevych | Heinz Koeppl
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

GeoHard: Towards Measuring Class-wise Hardness through Modelling Class Semantics
Fengyu Cai | Xinran Zhao | Hongming Zhang | Iryna Gurevych | Heinz Koeppl
Findings of the Association for Computational Linguistics: ACL 2024

A Survey of Confidence Estimation and Calibration in Large Language Models
Jiahui Geng | Fengyu Cai | Yuxia Wang | Heinz Koeppl | Preslav Nakov | Iryna Gurevych
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks in various domains. Despite their impressive performance, they can be unreliable due to factual errors in their generations. Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations. Much recent research has aimed to address this, but no comprehensive overview has organized it or outlined the main lessons learned. The present survey aims to bridge this gap. In particular, we outline the challenges and summarize recent technical advancements in LLM confidence estimation and calibration. We further discuss their applications and suggest promising directions for future work.
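
To make the calibration side of the survey concrete, below is a minimal sketch of Expected Calibration Error (ECE), a standard metric in this literature; the binning scheme, the function name, and the toy data are illustrative assumptions, not the survey's own implementation.

```python
# Expected Calibration Error (ECE): the gap between a model's confidence and
# its empirical accuracy, averaged over equal-width confidence bins.
# Illustrative sketch only; the bin count and data are assumptions.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |accuracy - confidence| over confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            # Bin weight (fraction of samples) times the calibration gap.
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Toy example: a systematically overconfident model yields a large ECE.
print(expected_calibration_error([0.95, 0.9, 0.85, 0.8, 0.7], [1, 0, 1, 0, 0]))
```

A perfectly calibrated model (confidence equal to accuracy in every bin) gives ECE = 0; overconfident LLMs typically show a sizable positive gap.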

2021

Self-training Improves Pre-training for Few-shot Learning in Task-oriented Dialog Systems
Fei Mi | Wanhao Zhou | Lingjing Kong | Fengyu Cai | Minlie Huang | Boi Faltings
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

As the labeling cost for the different modules of task-oriented dialog (ToD) systems is expensive, a major challenge is to train each module with the least amount of labeled data. Recently, large-scale pre-trained language models have shown promising results for few-shot learning in ToD. In this paper, we devise a self-training approach that exploits abundant unlabeled dialog data to further improve state-of-the-art pre-trained models in few-shot learning scenarios for ToD systems. Specifically, our approach iteratively labels the most confident unlabeled data to train a stronger Student model. Moreover, a new text augmentation technique (GradAug) is proposed to better train the Student by replacing non-crucial tokens using a masked language model. We conduct extensive experiments and present analyses on four downstream tasks in ToD: intent classification, dialog state tracking, dialog act prediction, and response selection. Empirical results demonstrate that the proposed self-training approach consistently improves state-of-the-art pre-trained models (BERT, ToD-BERT) when only a small amount of labeled data is available.
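
To make the loop concrete, here is a minimal sketch of the confidence-based self-training the abstract describes: each round fine-tunes the Student on the labeled set, scores the unlabeled pool, and promotes the most confident pseudo-labels. The model interface (train, predict_proba), the threshold tau, and top_k are illustrative assumptions; the GradAug masked-LM augmentation step is omitted.

```python
# Confidence-based self-training loop (sketch). The model interface and the
# hyperparameters are assumptions for illustration, not the paper's code.
import numpy as np

def self_train(model, texts, labels, pool, rounds=3, top_k=100, tau=0.9):
    """Each round: fine-tune on labeled data, pseudo-label the unlabeled
    pool, and move the most confident predictions into the labeled set."""
    texts, labels, pool = list(texts), list(labels), list(pool)
    for _ in range(rounds):
        model.train(texts, labels)                     # fine-tune the Student
        if not pool:
            break
        probs = np.asarray(model.predict_proba(pool))  # (n_pool, n_classes)
        conf, preds = probs.max(axis=1), probs.argmax(axis=1)
        ranked = np.argsort(-conf)                     # most confident first
        chosen = [int(i) for i in ranked[:top_k] if conf[i] >= tau]
        for i in chosen:                               # promote pseudo-labels
            texts.append(pool[i])
            labels.append(int(preds[i]))
        kept = set(chosen)
        pool = [x for j, x in enumerate(pool) if j not in kept]
    return model
```

Promoting only high-confidence pseudo-labels limits error propagation across rounds, which is why the abstract emphasizes labeling "the most confident unlabeled data".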