Generating Interpretations of Policy Announcements

Andreas Marfurt, Ashley Thornton, David Sylvan, James Henderson


Abstract
Recent advances in language modeling have focused on (potentially multiple-choice) question answering, open-ended generation, or math and coding problems. We look at a more nuanced task: the interpretation of statements by political actors. To this end, we present a dataset of policy announcements and corresponding annotated interpretations on the topic of US foreign policy relations with Russia from 1993 to 2016. We analyze the performance of finetuning standard sequence-to-sequence models of varying sizes on predicting the annotated interpretations and compare them to few-shot prompted large language models. We find that 1) model size is not the main factor for success on this task, 2) finetuning smaller models yields both quantitatively and qualitatively superior results to in-context learning with large language models, but 3) large language models pick up the annotation format and approximate the category distribution with just a few in-context examples.
Anthology ID:
2024.nlp4dh-1.50
Volume:
Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities
Month:
November
Year:
2024
Address:
Miami, USA
Editors:
Mika Hämäläinen, Emily Öhman, So Miyagawa, Khalid Alnajjar, Yuri Bizzoni
Venue:
NLP4DH
Publisher:
Association for Computational Linguistics
Pages:
513–520
URL:
https://aclanthology.org/2024.nlp4dh-1.50
Cite (ACL):
Andreas Marfurt, Ashley Thornton, David Sylvan, and James Henderson. 2024. Generating Interpretations of Policy Announcements. In Proceedings of the 4th International Conference on Natural Language Processing for Digital Humanities, pages 513–520, Miami, USA. Association for Computational Linguistics.
Cite (Informal):
Generating Interpretations of Policy Announcements (Marfurt et al., NLP4DH 2024)
PDF:
https://aclanthology.org/2024.nlp4dh-1.50.pdf