UPPAM: A Unified Pre-training Architecture for Political Actor Modeling based on Language

Xinyi Mou, Zhongyu Wei, Qi Zhang, Xuanjing Huang


Abstract
Modeling political actors is at the core of quantitative political science. Existing works have incorporated contextual information through graph models to better learn political actor representations for specific tasks. However, these approaches are bound to the structure and objectives of their training settings and cannot generalize to unseen politicians or other tasks. In this paper, we propose a Unified Pre-training Architecture for Political Actor Modeling based on language (UPPAM). In UPPAM, we aggregate statements to represent political actors and learn a mapping from language to representations, instead of learning representations of particular persons. We further design structure-aware and behavior-driven contrastive learning tasks to inject multidimensional information from the political context into this mapping. Within this framework, we can profile political actors from different aspects and solve various downstream tasks. Experimental results demonstrate the effectiveness and generalization capability of our method.
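
The abstract describes two ingredients: representing an actor by aggregating that actor's statements, and contrastive objectives that pull related actors together. The Python sketch below illustrates only this general pattern under stated assumptions; the backbone model, mean pooling, and the InfoNCE-style loss are illustrative choices, not the authors' released implementation.

    # Minimal sketch of statement aggregation + a contrastive loss.
    # Assumptions: a RoBERTa backbone, first-token pooling per statement,
    # mean aggregation across statements, and an InfoNCE-style objective.
    import torch
    import torch.nn.functional as F
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed backbone
    encoder = AutoModel.from_pretrained("roberta-base")

    def actor_representation(statements):
        # Encode all of one actor's statements as a batch.
        batch = tokenizer(statements, padding=True, truncation=True,
                          return_tensors="pt")
        with torch.no_grad():
            hidden = encoder(**batch).last_hidden_state   # (n, seq_len, dim)
        statement_vecs = hidden[:, 0]                     # first-token vector per statement
        return statement_vecs.mean(dim=0)                 # aggregate into one actor vector

    def info_nce(anchor, positive, negatives, temperature=0.07):
        # Pull the anchor actor toward its positive, away from negatives.
        candidates = torch.stack([positive] + list(negatives))       # (1 + k, dim)
        sims = F.cosine_similarity(anchor.unsqueeze(0), candidates)  # (1 + k,)
        return F.cross_entropy(sims.unsqueeze(0) / temperature,
                               torch.tensor([0]))          # positive is index 0

Per the abstract, the structure-aware task would sample positives/negatives from links in the political network and the behavior-driven task from recorded behavior; the exact sampling schemes are defined in the paper, not in this sketch.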
Anthology ID:
2023.acl-long.670
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
11996–12012
URL:
https://aclanthology.org/2023.acl-long.670
DOI:
10.18653/v1/2023.acl-long.670
Cite (ACL):
Xinyi Mou, Zhongyu Wei, Qi Zhang, and Xuanjing Huang. 2023. UPPAM: A Unified Pre-training Architecture for Political Actor Modeling based on Language. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11996–12012, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
UPPAM: A Unified Pre-training Architecture for Political Actor Modeling based on Language (Mou et al., ACL 2023)
PDF:
https://aclanthology.org/2023.acl-long.670.pdf
Video:
https://aclanthology.org/2023.acl-long.670.mp4