Finetuning Pretrained Transformers into Variational Autoencoders

Seongmin Park, Jihwa Lee


Abstract
Text variational autoencoders (VAEs) are notorious for posterior collapse, a phenomenon where the model’s decoder learns to ignore signals from the encoder. Because posterior collapse is known to be exacerbated by expressive decoders, Transformers have seen limited adoption as components of text VAEs. Existing studies that incorporate Transformers into text VAEs (Li et al., 2020; Fang et al., 2021) mitigate posterior collapse using massive pretraining, a technique unavailable to most of the research community without extensive computing resources. We present a simple two-phase training scheme to convert a sequence-to-sequence Transformer into a VAE with just finetuning. The resulting language model is competitive with massively pretrained Transformer-based VAEs on some internal metrics while falling short on others. To facilitate training, we comprehensively explore the impact of common posterior collapse alleviation techniques in the literature. We release our code for reproducibility.
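One of the posterior collapse alleviation techniques commonly explored in this line of work is KL annealing, where the KL term of the VAE objective is down-weighted early in training. The sketch below is illustrative only, not the authors' implementation: it shows a diagonal-Gaussian KL term and a cyclical annealing schedule (all function names and hyperparameters here are hypothetical).

```python
import math

def gaussian_kl(mu, logvar):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian
    # posterior, summed over latent dimensions.
    return sum(0.5 * (m * m + math.exp(lv) - lv - 1.0)
               for m, lv in zip(mu, logvar))

def kl_weight(step, cycle_len=1000, ramp_ratio=0.5):
    # Cyclical KL annealing: beta ramps linearly from 0 to 1 over the
    # first `ramp_ratio` of each cycle, then stays at 1 for the rest.
    pos = (step % cycle_len) / cycle_len
    return min(pos / ramp_ratio, 1.0)

def vae_loss(recon_nll, mu, logvar, step):
    # beta-weighted ELBO: reconstruction negative log-likelihood
    # plus the annealed KL regularizer.
    return recon_nll + kl_weight(step) * gaussian_kl(mu, logvar)
```

With the schedule above, the model can first learn to reconstruct (beta near 0) before the KL term pressures the posterior toward the prior, which is the usual motivation for annealing as a collapse mitigation.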
Anthology ID:
2021.insights-1.5
Volume:
Proceedings of the Second Workshop on Insights from Negative Results in NLP
Month:
November
Year:
2021
Address:
Online and Punta Cana, Dominican Republic
Editors:
João Sedoc, Anna Rogers, Anna Rumshisky, Shabnam Tafreshi
Venue:
insights
Publisher:
Association for Computational Linguistics
Pages:
29–35
URL:
https://aclanthology.org/2021.insights-1.5
DOI:
10.18653/v1/2021.insights-1.5
Cite (ACL):
Seongmin Park and Jihwa Lee. 2021. Finetuning Pretrained Transformers into Variational Autoencoders. In Proceedings of the Second Workshop on Insights from Negative Results in NLP, pages 29–35, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Finetuning Pretrained Transformers into Variational Autoencoders (Park & Lee, insights 2021)
PDF:
https://aclanthology.org/2021.insights-1.5.pdf
Video:
 https://aclanthology.org/2021.insights-1.5.mp4
Code
 seongminp/transformers-into-vaes
Data
Penn Treebank
SNLI