Yu-Lun Hsieh


2019

MONPA:中文命名實體及斷詞與詞性同步標註系統(MONPA: A Multitask Chinese Segmentation, Named-entity and Part-of-speech Annotator)
Wen-Chao Yeh | Yu-Lun Hsieh | Yung-Chun Chang | Wen-Lian Hsu
Proceedings of the 31st Conference on Computational Linguistics and Speech Processing (ROCLING 2019)

On the Robustness of Self-Attentive Models
Yu-Lun Hsieh | Minhao Cheng | Da-Cheng Juan | Wei Wei | Wen-Lian Hsu | Cho-Jui Hsieh
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

This work examines the robustness of self-attentive neural networks against adversarial input perturbations. Specifically, we investigate the attention and feature extraction mechanisms of state-of-the-art recurrent neural networks and self-attentive architectures for sentiment analysis, entailment, and machine translation under adversarial attacks. We also propose a novel attack algorithm for generating more natural adversarial examples that can mislead neural models but not humans. Experimental results show that, compared to recurrent neural models, self-attentive models are more robust against adversarial perturbation. In addition, we provide theoretical explanations for their superior robustness to support our claims.

2017

MONPA: Multi-objective Named-entity and Part-of-speech Annotator for Chinese using Recurrent Neural Network
Yu-Lun Hsieh | Yung-Chun Chang | Yi-Jie Huang | Shu-Hao Yeh | Chun-Hung Chen | Wen-Lian Hsu
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Part-of-speech (POS) tagging and named entity recognition (NER) are crucial steps in natural language processing. In addition, the difficulty of word segmentation places an additional burden on those who intend to deal with languages such as Chinese, and pipelined systems often suffer from error propagation. This work proposes an end-to-end model using a character-based recurrent neural network (RNN) to jointly accomplish segmentation, POS tagging, and NER of a Chinese sentence. Experiments on previous word segmentation and NER datasets show that a single model with the proposed architecture is comparable to those trained specifically for each task, and outperforms freely available software. Moreover, we provide a web-based interface for the public to easily access this resource.

Identifying Protein-protein Interactions in Biomedical Literature using Recurrent Neural Networks with Long Short-Term Memory
Yu-Lun Hsieh | Yung-Chun Chang | Nai-Wen Chang | Wen-Lian Hsu
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

In this paper, we propose a recurrent neural network model for identifying protein-protein interactions in biomedical literature. Experiments on the two largest public benchmark datasets, AIMed and BioInfer, demonstrate that our approach significantly surpasses state-of-the-art methods, with relative improvements of 10% and 18%, respectively. Cross-corpus evaluation also demonstrates that the proposed model remains robust despite using different training data. These results suggest that an RNN can effectively capture semantic relationships among proteins and generalize across different corpora, without any feature engineering.

CIAL at IJCNLP-2017 Task 2: An Ensemble Valence-Arousal Analysis System for Chinese Words and Phrases
Zheng-Wen Lin | Yung-Chun Chang | Chen-Ann Wang | Yu-Lun Hsieh | Wen-Lian Hsu
Proceedings of the IJCNLP 2017, Shared Tasks

Sentiment lexicons are very helpful in dimensional sentiment applications. Because the Chinese vocabulary is effectively unbounded, a method for predicting the sentiment of unseen Chinese words is required. The proposed method can handle both words and phrases by using an ADVWeight List for word prediction, which in turn improves our performance at the phrase level. The evaluation results demonstrate that our system is effective in dimensional sentiment analysis for Chinese phrases. The Mean Absolute Error (MAE) and Pearson's Correlation Coefficient (PCC) for Valence are 0.723 and 0.835, respectively, and those for Arousal are 0.914 and 0.756, respectively.

2016

How Do I Look? Publicity Mining From Distributed Keyword Representation of Socially Infused News Articles
Yu-Lun Hsieh | Yung-Chun Chang | Chun-Han Chu | Wen-Lian Hsu
Proceedings of The Fourth International Workshop on Natural Language Processing for Social Media

運用序列到序列生成架構於重寫式自動摘要(Exploiting Sequence-to-Sequence Generation Framework for Automatic Abstractive Summarization)[In Chinese]
Yu-Lun Hsieh | Shih-Hung Liu | Kuan-Yu Chen | Hsin-Min Wang | Wen-Lian Hsu | Berlin Chen
Proceedings of the 28th Conference on Computational Linguistics and Speech Processing (ROCLING 2016)

2015

Linguistic Template Extraction for Recognizing Reader-Emotion and Emotional Resonance Writing Assistance
Yung-Chun Chang | Cen-Chieh Chen | Yu-Lun Hsieh | Chien Chin Chen | Wen-Lian Hsu
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

2014

Semantic Frame-based Statistical Approach for Topic Detection
Yung-Chun Chang | Yu-Lun Hsieh | Cen-Chieh Chen | Chad Liu | Chun-Hung Lu | Wen-Lian Hsu
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing

探究新穎語句模型化技術於節錄式語音摘要 (Investigating Novel Sentence Modeling Techniques for Extractive Speech Summarization) [In Chinese]
Shih-Hung Liu | Kuan-Yu Chen | Yu-Lun Hsieh | Berlin Chen | Hsin-Min Wang | Wen-Lian Hsu
Proceedings of the 26th Conference on Computational Linguistics and Speech Processing (ROCLING 2014)

2013

Sinica-IASL Chinese spelling check system at Sighan-7
Ting-Hao Yang | Yu-Lun Hsieh | Yu-Hsuan Chen | Michael Tsang | Cheng-Wei Shih | Wen-Lian Hsu
Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing