Modeling Quantification and Scope in Abstract Meaning Representations

James Pustejovsky, Ken Lai, Nianwen Xue


Abstract
In this paper, we propose an extension to Abstract Meaning Representations (AMRs) to encode the scope of quantifiers and negation, in a way that overcomes the semantic gaps of the schema while maintaining its cognitive simplicity. Specifically, we address three phenomena not previously part of the AMR specification: quantification, negation (generally), and modality. The resulting representation, which we call “Uniform Meaning Representation” (UMR), adopts the predicative core of AMR and embeds it under a “scope” graph when appropriate. UMR representations differ from other treatments of quantification and modal scope phenomena in two ways: (a) they are more transparent; and (b) they specify default scope when possible.
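As background to the gap the abstract describes, the following sketch uses standard AMR (PENMAN) notation as given in the AMR annotation guidelines; the frameset label is illustrative, and the paper's own UMR scope notation is not reproduced here. In plain AMR, negation is a flat `:polarity -` attribute and a quantifier like "every" attaches locally to its noun, so a sentence such as "Every boy didn't go" gets one graph for two readings:

```lisp
; Standard AMR for "Every boy didn't go" (illustrative frameset):
(g / go-01
   :ARG0 (b / boy
            :quant (e / every))
   :polarity -)
; This single graph does not distinguish the two scopal readings:
;   (a) for every boy, he didn't go        (every > not)
;   (b) it is not the case that every boy went  (not > every)
```

It is this underspecification that motivates embedding the predicative core under an explicit scope graph in UMR.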
Anthology ID:
W19-3303
Volume:
Proceedings of the First International Workshop on Designing Meaning Representations
Month:
August
Year:
2019
Address:
Florence, Italy
Venues:
ACL | DMR | WS
Publisher:
Association for Computational Linguistics
Pages:
28–33
URL:
https://aclanthology.org/W19-3303
DOI:
10.18653/v1/W19-3303
Cite (ACL):
James Pustejovsky, Ken Lai, and Nianwen Xue. 2019. Modeling Quantification and Scope in Abstract Meaning Representations. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 28–33, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
Modeling Quantification and Scope in Abstract Meaning Representations (Pustejovsky et al., 2019)
PDF:
https://aclanthology.org/W19-3303.pdf