Yang-Yin Lee
2021
Enconter: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer
Lee Hsun Hsieh | Yang-Yin Lee | Ee-Peng Lim
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
Pretrained on large amounts of data, autoregressive language models can generate high-quality sequences. However, these models do not perform well under hard lexical constraints, as they lack fine control over the content generation process. Progressive insertion-based transformers can overcome this limitation and efficiently generate a sequence in parallel given some input tokens as constraints. These transformers, however, may fail to support hard lexical constraints, as their generation process is more likely to terminate prematurely. This paper analyzes such early termination problems and proposes the ENtity CONstrained insertion TransformER (ENCONTER), a new insertion transformer that addresses this pitfall without compromising much generation efficiency. We introduce a new training strategy that accounts for predefined hard lexical constraints (e.g., entities to be included in the generated sequence). Our experiments show that ENCONTER outperforms other baseline models on several performance metrics, rendering it more suitable for practical applications.
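To make the generation process described in the abstract concrete, here is a minimal sketch of progressive insertion-based decoding under hard lexical constraints: the canvas is initialized with the required entity tokens, and the model fills slots between tokens in parallel until it declines every slot. The model interface (predict_slot_tokens) and the NONE sentinel are hypothetical stand-ins, not ENCONTER's actual API.

```python
NONE = "<none>"  # sentinel meaning "insert nothing in this slot"

def insertion_decode(model, constraints, max_steps=32):
    """Grow a sequence around hard lexical constraints.

    Starts from the constraint tokens (e.g., required entities) and, at
    each step, inserts at most one token into every slot between adjacent
    tokens in parallel. Stops when the model declines every slot, so the
    constraint tokens appear in the output by construction.
    """
    canvas = list(constraints)  # constraint tokens are never removed
    for _ in range(max_steps):
        # One prediction per slot: before, between, and after the tokens,
        # so there are len(canvas) + 1 predictions per step.
        slot_preds = model.predict_slot_tokens(canvas)
        if all(tok == NONE for tok in slot_preds):
            break  # generation has converged
        new_canvas = []
        for i, tok in enumerate(canvas):
            if slot_preds[i] != NONE:
                new_canvas.append(slot_preds[i])
            new_canvas.append(tok)
        if slot_preds[-1] != NONE:
            new_canvas.append(slot_preds[-1])
        canvas = new_canvas
    return canvas
```

The early-termination pitfall the paper analyzes corresponds, in this sketch, to the model predicting NONE for every slot before the sequence is complete; since the constraints seed the canvas, they are preserved, but the surrounding text may never be generated.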
2020
MSD-1030: A Well-built Multi-Sense Evaluation Dataset for Sense Representation Models
Ting-Yu Yen | Yang-Yin Lee | Yow-Ting Shiue | Hen-Hsen Huang | Hsin-Hsi Chen
Proceedings of the Twelfth Language Resources and Evaluation Conference
Sense embedding models handle polysemy by giving each distinct meaning of a word form its own representation. They are considered improvements over word embedding models, and their effectiveness is usually judged with benchmarks such as semantic similarity datasets. However, most of these datasets are not designed for evaluating sense embeddings. In this research, we show that there are at least six concerns about evaluating sense embeddings with existing benchmark datasets, including the large proportions of single-sense words and the unexpectedly inferior performance of several multi-sense models relative to their single-sense counterparts. These observations seriously call into question whether evaluations based on these datasets can reflect a sense model's ability to capture different meanings. To address these issues, we propose the Multi-Sense Dataset (MSD-1030), which contains a high ratio of multi-sense word pairs. A series of analyses and experiments show that MSD-1030 serves as a more reliable benchmark for sense embeddings. The dataset is available at http://nlg.csie.ntu.edu.tw/nlpresource/MSD-1030/.
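For readers unfamiliar with the evaluation protocol the abstract refers to, here is a sketch of the standard setup for scoring sense embeddings on a word-pair similarity dataset such as MSD-1030: each pair is scored with MaxSim (the best cosine over all sense combinations), and the resulting scores are correlated with the human ratings via Spearman's rho. The sense_vectors lookup is a hypothetical dict mapping each word to its per-sense vectors; this is a generic sketch, not the paper's exact evaluation code.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def max_sim(senses_a, senses_b):
    # MaxSim: similarity of the closest pair of senses across the two words.
    return max(cosine(a, b) for a in senses_a for b in senses_b)

def evaluate(sense_vectors, pairs):
    """pairs: iterable of (word1, word2, human_score) triples."""
    preds, golds = [], []
    for w1, w2, gold in pairs:
        preds.append(max_sim(sense_vectors[w1], sense_vectors[w2]))
        golds.append(gold)
    rho, _ = spearmanr(preds, golds)
    return rho
```

Note that when most benchmark words are single-sense, MaxSim degenerates to plain cosine between single vectors, which is one reason (per the abstract) that existing datasets can fail to differentiate multi-sense models from single-sense ones.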
2018
GenSense: A Generalized Sense Retrofitting Model
Yang-Yin Lee | Ting-Yu Yen | Hen-Hsen Huang | Yow-Ting Shiue | Hsin-Hsi Chen
Proceedings of the 27th International Conference on Computational Linguistics
With the aid of recently proposed word embedding algorithms, the study of semantic similarity has advanced rapidly. However, many natural language processing tasks require sense-level representations. To address this issue, several studies have proposed sense embedding learning algorithms. In this paper, we present a generalized model derived from an existing sense retrofitting model. The generalization incorporates three major components: the semantic relations between senses, the relation strength, and the semantic strength. In our experiments, we show that the generalized model outperforms previous approaches in three types of evaluation: semantic relatedness, contextual word similarity, and semantic difference.
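As a rough illustration of the retrofitting idea the abstract generalizes, the sketch below iteratively pulls each sense vector toward its pretrained vector (weighted by a "semantic strength" alpha) and toward its semantically related senses (weighted by a "relation strength" beta). The uniform weights and the neighbor graph are illustrative assumptions in the spirit of standard retrofitting, not the paper's exact formulation, which allows these strengths to vary per relation and per sense.

```python
import numpy as np

def retrofit(vectors, neighbors, alpha=1.0, beta=1.0, iters=10):
    """vectors: {sense: np.ndarray}; neighbors: {sense: [related senses]}."""
    q = {s: v.copy() for s, v in vectors.items()}  # vectors being refined
    for _ in range(iters):
        for s, related in neighbors.items():
            if not related:
                continue
            # Each update is a weighted average of the original (pretrained)
            # vector and the current vectors of the related senses.
            num = alpha * vectors[s] + beta * sum(q[r] for r in related)
            den = alpha + beta * len(related)
            q[s] = num / den
    return q
```

In this framing, the generalization described in the abstract amounts to replacing the scalar alpha and beta with relation- and sense-specific weights, so that different semantic relations can pull with different strengths.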