Mode recovery in neural autoregressive sequence modeling

Ilia Kulikov, Sean Welleck, Kyunghyun Cho


Abstract
Despite their wide use, neural autoregressive sequence models trained with maximum likelihood have been shown in recent studies to exhibit unexpected and undesirable properties, such as an unreasonably high affinity to short sequences after training and to infinitely long sequences at decoding time. We propose to study these phenomena by investigating how the modes, or local maxima, of a distribution are maintained throughout the full learning chain of ground-truth, empirical, learned, and decoding-induced distributions, via the newly proposed mode recovery cost. We design a tractable testbed in which we build three types of ground-truth distributions: (1) an LSTM-based structured distribution, (2) an unstructured distribution in which the probability of a sequence does not depend on its content, and (3) a product of these two, which we call a semi-structured distribution. Our study reveals both expected and unexpected findings. First, starting with data collection, the mode recovery cost depends strongly on the ground-truth distribution and is highest for the semi-structured distribution. Second, after learning, the mode recovery cost from the ground-truth distribution may increase or decrease relative to data collection, with the largest degradation occurring under the semi-structured ground-truth distribution. Finally, the ability of the decoding-induced distribution to recover modes from the learned distribution is strongly affected by the choices made earlier in the learning chain. We conclude that future research must consider the entire learning chain in order to fully understand the potential and perils of neural autoregressive sequence models and to further improve them.
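To make the abstract's setup concrete, below is a minimal, hypothetical Python sketch of the three ground-truth distribution types and a mode recovery cost. The first-order Markov stand-in for the LSTM-based structured distribution, the top-k-overlap definition of the cost, and all names (markov_prob, mode_recovery_cost, k) are illustrative assumptions rather than the paper's exact constructions; see the paper and the uralik/mode_recovery repository for the actual definitions.

```python
# Hypothetical sketch (not the authors' code): build an unstructured
# distribution, a toy "structured" distribution, and their renormalized
# product ("semi-structured"), then measure a simple mode recovery cost.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Enumerate all sequences of length 4 over a 3-symbol vocabulary.
vocab = [0, 1, 2]
seqs = list(itertools.product(vocab, repeat=4))

# Unstructured: each sequence's probability is drawn independently of
# its content, then normalized.
p_unstruct = rng.random(len(seqs))
p_unstruct /= p_unstruct.sum()

# Structured (a toy stand-in for the paper's LSTM-based distribution):
# a first-order Markov factorization over symbols.
init = rng.dirichlet(np.ones(len(vocab)))
trans = rng.dirichlet(np.ones(len(vocab)), size=len(vocab))

def markov_prob(seq):
    p = init[seq[0]]
    for a, b in zip(seq, seq[1:]):
        p *= trans[a][b]
    return p

p_struct = np.array([markov_prob(s) for s in seqs])
p_struct /= p_struct.sum()

# Semi-structured: renormalized product of the two.
p_semi = p_unstruct * p_struct
p_semi /= p_semi.sum()

def mode_recovery_cost(p_src, p_tgt, k=10):
    """Fraction of p_src's top-k modes missing from p_tgt's top-k set.
    (One natural instantiation; the paper's cost is defined there.)"""
    top_src = set(np.argsort(p_src)[-k:])
    top_tgt = set(np.argsort(p_tgt)[-k:])
    return 1.0 - len(top_src & top_tgt) / k

print(mode_recovery_cost(p_struct, p_semi))
print(mode_recovery_cost(p_unstruct, p_semi))
```

In this toy instantiation the cost is zero when the target's top-k modes include all of the source's top-k modes and one when the two sets are disjoint; the paper studies how such costs accumulate across the ground-truth, empirical, learned, and decoding-induced stages of the chain.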
Anthology ID:
2021.spnlp-1.5
Volume:
Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021)
Month:
August
Year:
2021
Address:
Online
Editors:
Zornitsa Kozareva, Sujith Ravi, Andreas Vlachos, Priyanka Agrawal, André Martins
Venue:
spnlp
Publisher:
Association for Computational Linguistics
Pages:
44–52
URL:
https://aclanthology.org/2021.spnlp-1.5
DOI:
10.18653/v1/2021.spnlp-1.5
Cite (ACL):
Ilia Kulikov, Sean Welleck, and Kyunghyun Cho. 2021. Mode recovery in neural autoregressive sequence modeling. In Proceedings of the 5th Workshop on Structured Prediction for NLP (SPNLP 2021), pages 44–52, Online. Association for Computational Linguistics.
Cite (Informal):
Mode recovery in neural autoregressive sequence modeling (Kulikov et al., spnlp 2021)
PDF:
https://aclanthology.org/2021.spnlp-1.5.pdf
Optional supplementary material:
 2021.spnlp-1.5.OptionalSupplementaryMaterial.zip
Video:
 https://aclanthology.org/2021.spnlp-1.5.mp4
Code:
uralik/mode_recovery
Data:
WikiText-103, WikiText-2