Challenging Large Language Models with New Tasks: A Study on their Adaptability and Robustness

Chenxi Li, Yuanhe Tian, Zhaxi Zerong, Yan Song, Fei Xia


Abstract
Recent progress in large language models (LLMs) has marked a notable milestone in the field of artificial intelligence. The conventional evaluation of LLMs primarily relies on existing tasks and benchmarks, raising concerns about test set contamination and the genuine comprehension abilities of LLMs. To address these concerns, we propose to evaluate LLMs by designing new tasks, automatically generating evaluation datasets for those tasks, and conducting detailed error analyses to scrutinize LLMs' adaptability to new tasks, their sensitivity to prompt variations, and their error tendencies. We investigate the capacity of LLMs to adapt to new but simple tasks, especially when these tasks diverge from the models' pre-existing knowledge. Our methodology emphasizes the creation of straightforward tasks, facilitating precise error analysis that uncovers the underlying causes of LLM failures. This approach also aims to identify effective strategies for enhancing LLM performance based on the detailed error analysis of system output.
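
To make the dataset-generation step concrete, below is a minimal Python sketch of how one might automatically construct instances of a simple, novel task together with gold answers and prompt variants. The task shown (interleaving the letters of two words) and all identifiers are illustrative assumptions chosen for exposition; they are not the authors' actual tasks, data, or code.

    # Hypothetical sketch: auto-generating a small evaluation set for a simple "new" task.
    # The task and all names below are illustrative assumptions, not the paper's own tasks.

    import json
    import random


    def make_instance(rng: random.Random) -> dict:
        """Create one instance: interleave two five-letter words and record the gold answer."""
        words = ["apple", "stone", "river", "cloud", "light", "grass"]
        a, b = rng.sample(words, 2)
        # All candidate words have length 5, so a simple zip covers every character.
        gold = "".join(x + y for x, y in zip(a, b))
        # Two surface-level prompt variants for probing sensitivity to prompt wording.
        prompt_variants = [
            f"Interleave the letters of '{a}' and '{b}', starting with '{a}'.",
            f"Merge '{a}' and '{b}' by alternating their characters, beginning with '{a}'.",
        ]
        return {"word_a": a, "word_b": b, "prompts": prompt_variants, "answer": gold}


    def build_dataset(n: int = 100, seed: int = 0) -> list[dict]:
        """Generate n instances with a fixed seed so the dataset is reproducible."""
        rng = random.Random(seed)
        return [make_instance(rng) for _ in range(n)]


    if __name__ == "__main__":
        data = build_dataset(5)
        print(json.dumps(data[0], indent=2))

Because the gold answers are produced programmatically, model outputs can be scored exactly and failures can be grouped by instance properties (e.g., which prompt variant was used), in line with the error-analysis goal described in the abstract.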
Anthology ID:
2024.findings-acl.485
Volume:
Findings of the Association for Computational Linguistics ACL 2024
Month:
August
Year:
2024
Address:
Bangkok, Thailand and virtual meeting
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
8140–8162
URL:
https://aclanthology.org/2024.findings-acl.485
Cite (ACL):
Chenxi Li, Yuanhe Tian, Zhaxi Zerong, Yan Song, and Fei Xia. 2024. Challenging Large Language Models with New Tasks: A Study on their Adaptability and Robustness. In Findings of the Association for Computational Linguistics ACL 2024, pages 8140–8162, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Challenging Large Language Models with New Tasks: A Study on their Adaptability and Robustness (Li et al., Findings 2024)
PDF:
https://aclanthology.org/2024.findings-acl.485.pdf