Rethinking STS and NLI in Large Language Models

Yuxia Wang, Minghan Wang, Preslav Nakov


Abstract
Recent years have seen the rise of large language models (LLMs), for which practitioners commonly use task-specific prompts; this approach has been shown to be effective for a variety of tasks. However, when applied to semantic textual similarity (STS) and natural language inference (NLI), the effectiveness of LLMs turns out to be limited by low accuracy in low-resource domains, model overconfidence, and difficulty in capturing the disagreement among human judgements. With this in mind, here we rethink STS and NLI in the era of LLMs. We first evaluate STS and NLI performance in the clinical/biomedical domain, and then we assess LLMs' predictive confidence and their capability of capturing collective human opinions. We find that these old problems remain to be properly addressed in the era of LLMs.
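To make the evaluation setup described above concrete, here is a minimal sketch of the two ingredients the abstract refers to: a task-specific zero-shot NLI prompt, and a divergence measure comparing a model's label distribution against the empirical distribution of human annotations. The prompt template, the js_divergence helper, and the toy distributions are illustrative assumptions, not the paper's actual prompts or metrics.

```python
import numpy as np

# Hypothetical zero-shot NLI prompt template; the paper's actual prompts may differ.
NLI_PROMPT = (
    "Premise: {premise}\n"
    "Hypothesis: {hypothesis}\n"
    "Question: Does the premise entail the hypothesis? "
    "Answer with one of: entailment, neutral, contradiction."
)

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two label distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy example: an overconfident model puts nearly all probability mass on one
# label, while ten human annotators disagree (6 entailment, 3 neutral,
# 1 contradiction). A large divergence signals that the model fails to
# capture the collective human opinion.
model_dist = np.array([0.97, 0.02, 0.01])  # e.g., softmax over answer options
human_dist = np.array([0.6, 0.3, 0.1])     # normalized annotator vote counts

print(NLI_PROMPT.format(premise="A man is playing a guitar.",
                        hypothesis="A person is making music."))
print(f"JS divergence from human opinion: {js_divergence(model_dist, human_dist):.3f}")
```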
Anthology ID:
2024.findings-eacl.65
Volume:
Findings of the Association for Computational Linguistics: EACL 2024
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
965–982
URL:
https://aclanthology.org/2024.findings-eacl.65
Cite (ACL):
Yuxia Wang, Minghan Wang, and Preslav Nakov. 2024. Rethinking STS and NLI in Large Language Models. In Findings of the Association for Computational Linguistics: EACL 2024, pages 965–982, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Rethinking STS and NLI in Large Language Models (Wang et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-eacl.65.pdf