Chen Fang
2018
Speeding up Context-based Sentence Representation Learning with Non-autoregressive Convolutional Decoding
Shuai Tang | Hailin Jin | Chen Fang | Zhaowen Wang | Virginia de Sa
Proceedings of the Third Workshop on Representation Learning for NLP
We propose an asymmetric encoder-decoder structure, which keeps an RNN as the encoder and has a CNN as the decoder, and the model uses only the subsequent context as supervision. The asymmetry in both model architecture and training pair greatly reduces training time. The contribution of our work is summarized as follows: 1. We design experiments to show that an autoregressive decoder or an RNN decoder is not necessary for the encoder-decoder type of models in terms of learning sentence representations, and based on our results, we present two findings. 2. These two findings lead to our final model design, which has an RNN encoder and a CNN decoder, and it learns to encode the current sentence and decode the subsequent contiguous words all at once. 3. With a suite of techniques, our model performs well on downstream tasks and can be trained efficiently on a large unlabelled corpus.
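A minimal sketch, not the authors' code, of the asymmetric structure the abstract describes: an RNN (here a GRU) encodes the current sentence, and a convolutional decoder predicts all words of the subsequent context in one pass rather than autoregressively. All module choices, sizes, and the name RNNEncoderCNNDecoder are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RNNEncoderCNNDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=600, ctx_len=30):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # RNN encoder over the current sentence
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Non-autoregressive decoder: 1-D convolutions over the sentence
        # representation tiled to the target context length
        self.decoder = nn.Sequential(
            nn.Conv1d(hid_dim, hid_dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hid_dim, vocab_size, kernel_size=3, padding=1),
        )
        self.ctx_len = ctx_len

    def forward(self, sent_ids):
        # sent_ids: (batch, sent_len) token indices of the current sentence
        _, h = self.encoder(self.embed(sent_ids))   # h: (1, batch, hid_dim)
        rep = h.squeeze(0)                          # sentence representation
        # Tile the representation across all context positions and decode
        # every target word at once, instead of generating them one by one.
        tiled = rep.unsqueeze(2).expand(-1, -1, self.ctx_len)
        logits = self.decoder(tiled)                # (batch, vocab, ctx_len)
        return rep, logits
```

Because every target position is predicted from the same encoded representation, the decoder has no sequential dependency at training time, which is the source of the speed-up the title refers to.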
2017
Rethinking Skip-thought: A Neighborhood based Approach
Shuai Tang | Hailin Jin | Chen Fang | Zhaowen Wang | Virginia de Sa
Proceedings of the 2nd Workshop on Representation Learning for NLP
We study the skip-thought model with neighborhood information as weak supervision. More specifically, we propose a skip-thought neighbor model that treats the adjacent sentences as a neighborhood. We train our skip-thought neighbor model on a large corpus of contiguous sentences, and then evaluate the trained model on 7 tasks, which include semantic relatedness, paraphrase detection, and classification benchmarks. Both quantitative comparison and qualitative investigation are conducted. We empirically show that our skip-thought neighbor model performs as well as the skip-thought model on the evaluation tasks. In addition, we found that incorporating an autoencoder path does not help our model perform better, while it hurts the performance of the skip-thought model.
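A small sketch, under my own assumptions rather than the paper's released code, of how contiguous sentences can be paired into a neighborhood for this kind of weak supervision: each sentence is paired with its neighbors within a window, and those pairs serve as encode/decode targets. The function name neighbor_pairs and the window parameter are hypothetical.

```python
def neighbor_pairs(sentences, window=1):
    """Yield (current, neighbor) pairs from a list of contiguous sentences."""
    for i, sent in enumerate(sentences):
        for offset in range(-window, window + 1):
            j = i + offset
            if offset != 0 and 0 <= j < len(sentences):
                yield sent, sentences[j]

# Example: with window=1, sentence i is paired with sentences i-1 and i+1.
corpus = ["He opened the door.", "The room was dark.", "He flipped the switch."]
print(list(neighbor_pairs(corpus)))
```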