Exploring Looping Effects in RNN-based Architectures

Andrei Shcherbakov, Saliha Muradoglu, Ekaterina Vylomova


Abstract
This paper investigates repetitive loops, a common problem in contemporary text generation systems (such as machine translation, language modelling, and morphological inflection). More specifically, we conduct a study on neural models with recurrent units by explicitly altering their decoder's internal state. We use the task of morphological reinflection as a proxy to study the effects of these changes. Our results show that the probability of repetitive loops occurring is significantly reduced by introducing an extra neural decoder output, specifically trained to produce a gradually increasing value as each character of a given sequence is generated. We also explored variations of the technique and found that feeding the extra output back into the decoder amplifies the positive effect.
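The mechanism the abstract describes, an extra decoder head trained to emit a monotonically increasing "progress" value per generated character, optionally fed back into the decoder input, can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: all class names, dimensions, and weight layouts are assumptions, and the training loop that fits the progress head to an increasing target is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

class CounterRNNDecoder:
    """Hypothetical sketch: a vanilla RNN decoder with (a) the usual
    character-logits head and (b) an extra scalar 'progress' head that
    would be trained to output gradually increasing values (0, 1, 2, ...)
    as characters are generated. With feed_back=True, the previous
    progress value is concatenated onto the next input, mirroring the
    feedback variant the abstract reports as most effective."""

    def __init__(self, vocab, hidden, feed_back=False):
        self.feed_back = feed_back
        in_dim = vocab + (1 if feed_back else 0)  # +1 slot for fed-back progress
        self.W_xh = rng.normal(0, 0.1, (hidden, in_dim))
        self.W_hh = rng.normal(0, 0.1, (hidden, hidden))
        self.W_hy = rng.normal(0, 0.1, (vocab, hidden))  # character logits head
        self.w_hc = rng.normal(0, 0.1, (hidden,))        # extra progress head
        self.hidden = hidden
        self.vocab = vocab

    def step(self, x_onehot, h, prev_progress):
        x = x_onehot
        if self.feed_back:
            x = np.concatenate([x, [prev_progress]])
        h = np.tanh(self.W_xh @ x + self.W_hh @ h)
        logits = self.W_hy @ h
        progress = float(self.w_hc @ h)  # trained toward step index (not shown)
        return logits, progress, h

    def decode(self, start_id, steps):
        h = np.zeros(self.hidden)
        progress = 0.0
        tok = start_id
        out, progress_trace = [], []
        for _ in range(steps):
            x = np.eye(self.vocab)[tok]          # one-hot of previous character
            logits, progress, h = self.step(x, h, progress)
            tok = int(np.argmax(logits))         # greedy decoding for brevity
            out.append(tok)
            progress_trace.append(progress)
        return out, progress_trace
```

The intuition for why such a head might suppress loops: a looping decoder revisits near-identical hidden states, but a state that must also encode an ever-growing counter can never exactly repeat, breaking the cycle.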
Anthology ID:
2020.alta-1.15
Volume:
Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association
Month:
December
Year:
2020
Address:
Virtual Workshop
Venue:
ALTA
Publisher:
Australasian Language Technology Association
Pages:
115–120
URL:
https://aclanthology.org/2020.alta-1.15
Cite (ACL):
Andrei Shcherbakov, Saliha Muradoglu, and Ekaterina Vylomova. 2020. Exploring Looping Effects in RNN-based Architectures. In Proceedings of the 18th Annual Workshop of the Australasian Language Technology Association, pages 115–120, Virtual Workshop. Australasian Language Technology Association.
Cite (Informal):
Exploring Looping Effects in RNN-based Architectures (Shcherbakov et al., ALTA 2020)
PDF:
https://aclanthology.org/2020.alta-1.15.pdf