Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?

Guijin Son, SangWon Baek, Sangdae Nam, Ilgyun Jeong, Seungone Kim


Abstract
Large language models (LLMs) are typically prompted to follow a single instruction per inference call. In this work, we analyze whether LLMs also hold the capability to handle multiple instructions simultaneously, denoted as Multi-Task Inference. For this purpose, we introduce the MTI Bench (Multi-Task Inference Benchmark), a comprehensive evaluation benchmark encompassing 5,000 instances across 25 tasks. Each task in the MTI Bench involves 2 to 3 sub-tasks. As expected, we first demonstrate that Multi-Task Inference reduces the total inference time by 1.46× on average, since it does not require multiple inference calls. Interestingly, contrary to the expectation that LLMs would perform better when tasks are divided, we find that state-of-the-art LLMs, such as Llama-2-Chat-70B and GPT-4, show up to 7.3% and 12.4% improved performance with Multi-Task Inference compared to Single-Task Inference on the MTI Bench. We release the MTI Bench dataset and our code at this [link](https://anonymous.4open.science/r/MTI-Bench-6F01).
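To make the contrast between the two inference modes concrete, here is a minimal illustrative sketch (not the paper's released code) of how several sub-task instructions can be issued either as separate Single-Task Inference prompts, one per call, or batched into one Multi-Task Inference prompt that a single call answers. The prompt wording and helper names below are assumptions for illustration only.

```python
def build_single_task_prompts(instructions):
    """Single-Task Inference: one prompt (and one inference call) per instruction."""
    return [f"Instruction: {inst}\nAnswer:" for inst in instructions]


def build_multi_task_prompt(instructions):
    """Multi-Task Inference: all sub-task instructions batched into one prompt,
    so the model handles them in a single inference call."""
    numbered = "\n".join(
        f"Task {i}: {inst}" for i, inst in enumerate(instructions, start=1)
    )
    return (
        "Answer every task below, labeling each answer with its task number.\n"
        f"{numbered}\n"
        "Answers:"
    )


if __name__ == "__main__":
    # Example pair of sub-tasks, mirroring the 2-3 sub-tasks per MTI Bench task.
    subtasks = [
        "Extract all named entities from the passage.",
        "Classify the sentiment of the passage.",
    ]
    single = build_single_task_prompts(subtasks)   # 2 prompts -> 2 calls
    multi = build_multi_task_prompt(subtasks)      # 1 prompt  -> 1 call
    print(len(single), "single-task calls vs. 1 multi-task call")
    print(multi)
```

The time saving the paper reports comes from this collapse of N inference calls into one; the surprising finding is that answer quality can also improve when the sub-tasks share one prompt.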
Anthology ID:
2024.acl-long.304
Volume:
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
5606–5627
URL:
https://aclanthology.org/2024.acl-long.304
DOI:
10.18653/v1/2024.acl-long.304
Cite (ACL):
Guijin Son, SangWon Baek, Sangdae Nam, Ilgyun Jeong, and Seungone Kim. 2024. Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once?. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5606–5627, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once? (Son et al., ACL 2024)
PDF:
https://aclanthology.org/2024.acl-long.304.pdf