Large Language Models Are No Longer Shallow Parsers

Yuanhe Tian, Fei Xia, Yan Song


Abstract
The development of large language models (LLMs) has brought significant changes to the field of natural language processing (NLP), enabling remarkable performance on various high-level tasks, such as machine translation, question answering, and dialogue generation, under end-to-end settings without requiring much training data. Meanwhile, fundamental NLP tasks, particularly syntactic parsing, remain essential both for language study and for evaluating the capability of LLMs for instruction understanding and usage. In this paper, we analyze and improve the capability of current state-of-the-art LLMs on a classic fundamental task, namely constituency parsing, a representative syntactic task in both linguistics and natural language processing. We observe that these LLMs are effective at shallow parsing but struggle to produce correct full parse trees. To improve the performance of LLMs on deep syntactic parsing, we propose a three-step approach that first prompts LLMs for chunking, then filters out low-quality chunks, and finally adds the remaining chunks to the prompts that instruct LLMs to parse, further enhanced by chain-of-thought prompting. Experimental results on English and Chinese benchmark datasets demonstrate the effectiveness of our approach in improving LLMs' performance on constituency parsing.
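For illustration, here is a minimal Python sketch of the three-step pipeline the abstract describes (chunk, filter, then parse with chunk-augmented prompts). The `query_llm` function, the prompt wording, and the chunk filter are hypothetical placeholders, not the authors' exact prompts or heuristics.

```python
# Sketch of the chunk-then-parse prompting pipeline described in the abstract.
# All prompts and the filtering rule are illustrative assumptions.

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    raise NotImplementedError


def chunk_sentence(sentence: str) -> list[str]:
    # Step 1: prompt the LLM to segment the sentence into base chunks.
    reply = query_llm(
        "Split the following sentence into base phrases (chunks), "
        f"one per line, in the form 'LABEL: text':\n{sentence}"
    )
    return [line.strip() for line in reply.splitlines() if line.strip()]


def filter_chunks(sentence: str, chunks: list[str]) -> list[str]:
    # Step 2: discard low-quality chunks. Here we keep only chunks whose
    # text actually appears in the sentence (an assumed simple filter).
    kept = []
    for chunk in chunks:
        _, _, text = chunk.partition(":")
        if text.strip() and text.strip() in sentence:
            kept.append(chunk)
    return kept


def parse_with_chunks(sentence: str, chunks: list[str]) -> str:
    # Step 3: add the remaining chunks to the parsing prompt and ask the
    # LLM for a full constituency tree, with chain-of-thought prompting.
    chunk_block = "\n".join(chunks)
    prompt = (
        "Given the sentence and its chunks, produce a bracketed "
        "constituency parse tree. Think step by step.\n"
        f"Sentence: {sentence}\nChunks:\n{chunk_block}\nTree:"
    )
    return query_llm(prompt)


def constituency_parse(sentence: str) -> str:
    chunks = filter_chunks(sentence, chunk_sentence(sentence))
    return parse_with_chunks(sentence, chunks)
```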
Anthology ID: 2024.acl-long.384
Volume: Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Lun-Wei Ku, Andre Martins, Vivek Srikumar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 7131–7142
URL: https://aclanthology.org/2024.acl-long.384
Cite (ACL): Yuanhe Tian, Fei Xia, and Yan Song. 2024. Large Language Models Are No Longer Shallow Parsers. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7131–7142, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): Large Language Models Are No Longer Shallow Parsers (Tian et al., ACL 2024)
PDF: https://aclanthology.org/2024.acl-long.384.pdf