CopyNext: Explicit Span Copying and Alignment in Sequence to Sequence Models

Abhinav Singh, Patrick Xia, Guanghui Qin, Mahsa Yarmohammadi, Benjamin Van Durme


Abstract
Copy mechanisms are employed in sequence to sequence (seq2seq) models to generate reproductions of words from the input to the output. These frameworks, operating at the lexical type level, fail to provide an explicit alignment that records where each token was copied from. Further, they require contiguous token sequences from the input (spans) to be copied individually. We present a model with an explicit token-level copy operation and extend it to copying entire spans. Our model provides hard alignments between spans in the input and output, allowing for nontraditional applications of seq2seq, like information extraction. We demonstrate the approach on Nested Named Entity Recognition, achieving near state-of-the-art accuracy with an order of magnitude increase in decoding speed.
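As a rough illustration of the idea summarized in the abstract, the sketch below (an assumption for exposition, not the authors' released implementation; all class, parameter, and dimension names are invented here) shows one way a decoder step could score ordinary vocabulary generation, copying any source token, and a single extra "copy next" action that extends the previously copied span by one token, yielding hard alignments as a side effect of which source position is selected.

# Minimal PyTorch sketch of a CopyNext-style decoding step (illustrative only).
import torch
import torch.nn as nn

class CopyNextStep(nn.Module):
    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.gen_proj = nn.Linear(hidden_size, vocab_size)    # generate a vocabulary token
        self.copy_proj = nn.Linear(hidden_size, hidden_size)  # bilinear scores for copying a source token
        self.copy_next = nn.Linear(hidden_size, 1)             # score for the single CopyNext action

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        gen_logits = self.gen_proj(dec_state)                  # (batch, vocab)
        copy_logits = torch.bmm(
            enc_states, self.copy_proj(dec_state).unsqueeze(-1)
        ).squeeze(-1)                                          # (batch, src_len)
        next_logit = self.copy_next(dec_state)                 # (batch, 1)
        # One softmax over [vocabulary | source positions | CopyNext]:
        # picking a source position gives a hard token-level alignment, and
        # repeated CopyNext actions copy a contiguous span from the input.
        return torch.log_softmax(
            torch.cat([gen_logits, copy_logits, next_logit], dim=-1), dim=-1
        )

Under this reading, span copying costs one action per token but needs no search over source positions after the first copy, which is consistent with the decoding-speed gains the abstract reports.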
Anthology ID:
2020.spnlp-1.2
Volume:
Proceedings of the Fourth Workshop on Structured Prediction for NLP
Month:
November
Year:
2020
Address:
Online
Editors:
Priyanka Agrawal, Zornitsa Kozareva, Julia Kreutzer, Gerasimos Lampouras, André Martins, Sujith Ravi, Andreas Vlachos
Venue:
spnlp
Publisher:
Association for Computational Linguistics
Pages:
11–16
URL:
https://aclanthology.org/2020.spnlp-1.2
DOI:
10.18653/v1/2020.spnlp-1.2
Cite (ACL):
Abhinav Singh, Patrick Xia, Guanghui Qin, Mahsa Yarmohammadi, and Benjamin Van Durme. 2020. CopyNext: Explicit Span Copying and Alignment in Sequence to Sequence Models. In Proceedings of the Fourth Workshop on Structured Prediction for NLP, pages 11–16, Online. Association for Computational Linguistics.
Cite (Informal):
CopyNext: Explicit Span Copying and Alignment in Sequence to Sequence Models (Singh et al., spnlp 2020)
PDF:
https://aclanthology.org/2020.spnlp-1.2.pdf
Optional supplementary material:
2020.spnlp-1.2.OptionalSupplementaryMaterial.pdf
Video:
https://slideslive.com/38940142