Prompting Language Models for Linguistic Structure

Terra Blevins, Hila Gonen, Luke Zettlemoyer


Abstract
Although pretrained language models (PLMs) can be prompted to perform a wide range of language tasks, it remains an open question how much of this ability stems from generalizable linguistic understanding versus surface-level lexical patterns. To test this, we present a structured prompting approach for linguistic structured prediction tasks, allowing us to perform zero- and few-shot sequence tagging with autoregressive PLMs. We evaluate this approach on part-of-speech tagging, named entity recognition, and sentence chunking, demonstrating strong few-shot performance in all cases. We also find that while PLMs contain significant prior knowledge of task labels due to task leakage into the pretraining corpus, structured prompting can also retrieve linguistic structure with arbitrary labels. These findings indicate that the in-context learning ability and linguistic knowledge of PLMs generalize beyond memorization of their training data.
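The abstract describes structured prompting for sequence tagging only at a high level. As a rough illustration of the idea, the sketch below elicits per-token part-of-speech tags from an off-the-shelf causal LM by showing word_tag demonstrations and decoding one tag at a time. The model choice (gpt2), the word_tag pair format, the demonstration sentences, and the greedy decoding loop are assumptions made for this sketch; they are not the paper's exact prompt format or decoding procedure.

# Illustrative sketch of few-shot structured prompting for POS tagging
# with an autoregressive PLM (not the authors' exact setup).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any causal LM can stand in here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# A few in-context demonstrations: each sentence labeled word by word.
demonstrations = [
    [("The", "DET"), ("dog", "NOUN"), ("barked", "VERB"), (".", "PUNCT")],
    [("She", "PRON"), ("reads", "VERB"), ("books", "NOUN"), (".", "PUNCT")],
]

def format_demo(pairs):
    # One "word_tag" pair per token, matching the assumed prompt format.
    return " ".join(f"{word}_{tag}" for word, tag in pairs)

def tag_sentence(words, max_tag_tokens=3):
    """Tag `words` one at a time: append the next word, let the model
    greedily generate its tag, then feed the pair back into the prompt."""
    prompt = "\n".join(format_demo(d) for d in demonstrations) + "\n"
    tags = []
    for word in words:
        prompt += f"{word}_"
        inputs = tokenizer(prompt, return_tensors="pt")
        with torch.no_grad():
            output = model.generate(
                **inputs,
                max_new_tokens=max_tag_tokens,
                do_sample=False,
                pad_token_id=tokenizer.eos_token_id,
            )
        # Keep only the newly generated continuation, not the prompt.
        new_text = tokenizer.decode(
            output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
        )
        tag = new_text.split()[0] if new_text.split() else ""
        tags.append(tag)
        prompt += f"{tag} "
    return tags

print(tag_sentence(["A", "cat", "sleeps", "."]))

In practice the paper reports that constraining or post-processing the generated label string matters for smaller models; the plain greedy decode above is only the simplest possible variant of the idea.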
Anthology ID:
2023.acl-long.367
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
6649–6663
URL:
https://aclanthology.org/2023.acl-long.367
DOI:
10.18653/v1/2023.acl-long.367
Cite (ACL):
Terra Blevins, Hila Gonen, and Luke Zettlemoyer. 2023. Prompting Language Models for Linguistic Structure. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6649–6663, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Prompting Language Models for Linguistic Structure (Blevins et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.367.pdf
Video:
https://aclanthology.org/2023.acl-long.367.mp4