Kuan-Yu Chen

Also published as: Kuan-yu Chen


2025

Creativity in LLM-based Multi-Agent Systems: A Survey
Yi-Cheng Lin | Kang-Chieh Chen | Zhe-Yan Li | Tzu-Heng Wu | Tzu-Hsuan Wu | Kuan-Yu Chen | Hung-yi Lee | Yun-Nung Chen
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Large language model (LLM)-driven multi-agent systems (MAS) are transforming how humans and AIs collaboratively generate ideas and artifacts. While existing surveys provide comprehensive overviews of MAS infrastructures, they largely overlook the dimension of creativity, including how novel outputs are generated and evaluated, how creativity informs agent personas, and how creative workflows are coordinated. This is the first survey dedicated to creativity in MAS. We focus on text and image generation tasks, and present: (1) a taxonomy of agent proactivity and persona design; (2) an overview of generation techniques, including divergent exploration, iterative refinement, and collaborative synthesis, as well as relevant datasets and evaluation metrics; and (3) a discussion of key challenges, such as inconsistent evaluation standards, insufficient bias mitigation, coordination conflicts, and the lack of unified benchmarks. This survey offers a structured framework and roadmap for advancing the development, evaluation, and standardization of creative MAS.

Training a Chinese Listenability Model Using Word2Vec to Predict the Difficulty of Spoken Texts
Yen-Hsiang Chien | Hou-Chiang Tseng | Kuan-Yu Chen | Yao-Ting Sung
Proceedings of the 37th Conference on Computational Linguistics and Speech Processing (ROCLING 2025)

With the proliferation of digital learning, an increasing number of learners are engaging with audio-visual materials. For preschool and lower elementary students, whose literacy skills are still limited, knowledge acquisition relies more heavily on spoken and visual content. Traditional readability models were primarily developed for written texts, and their applicability to spoken materials remains uncertain. To address this issue, this study investigates the impact of different word segmentation tools and language models on the performance of automatic grade classification models for Chinese spoken materials. Support Vector Machines were employed for grade prediction, aiming to automatically determine the appropriate grade level of learning resources and assist learners in selecting suitable materials. The results show that language models with higher-dimensional word embeddings achieved better classification performance, with an accuracy of up to 61% and an adjacent accuracy of 76%. These findings may contribute to future digital learning platforms or educational resource recommendation systems by automatically providing students with appropriate listening materials to enhance learning outcomes.
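The grade-prediction pipeline described above, averaged word embeddings fed to a Support Vector Machine and scored by accuracy and adjacent accuracy, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the random vectors stand in for Word2Vec embeddings and the labels are toy grade levels.

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-ins: in the paper, each spoken text would be represented by
# (e.g. averaged) Word2Vec embeddings; here random vectors and random
# grade labels (1-6) take their place.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))      # 120 texts, 50-d "embeddings"
y = rng.integers(1, 7, size=120)    # grade levels 1..6

clf = SVC(kernel="rbf").fit(X, y)   # SVM grade classifier
pred = clf.predict(X)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

def adjacent_accuracy(y_true, y_pred):
    # Counts a prediction as correct if it is within one grade level.
    return float(np.mean(np.abs(y_true - y_pred) <= 1))
```

Adjacent accuracy is by construction at least as high as exact accuracy, consistent with the reported 61% accuracy versus 76% adjacent accuracy.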

Toward Traditional Chinese ModernBERT: A Preliminary Study
Yi-En Chen | Qiao-Ying He | Kuan-Yu Chen
Proceedings of the 37th Conference on Computational Linguistics and Speech Processing (ROCLING 2025)

This study employs several state-of-the-art techniques, including RoPE and Flash Attention, and leverages large-scale Chinese web corpora and encyclopedic data to pre-train an encoder model specifically designed for long text in Traditional Chinese. We evaluate the model on tasks such as reading comprehension and text classification, and the results show that its overall performance lags behind existing Chinese benchmarks. Through pseudo-perplexity analysis, we infer that the pre-training phase did not sufficiently capture the data distribution, potentially due to factors such as hyperparameters, convergence, and data quality. Although the results are suboptimal, this study still offers valuable experimental insights and directions for improving Chinese language model development.
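The pseudo-perplexity analysis mentioned above can be illustrated with a minimal sketch. The metric masks each token in turn and scores the model's probability of the true token; here the per-token probabilities are toy inputs rather than outputs of the pre-trained encoder.

```python
import math

def pseudo_perplexity(token_probs):
    # token_probs[i] is the masked LM's probability of the true i-th token
    # when that token is masked; PPPL = exp(mean negative log-probability).
    avg_nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_nll)
```

A poorly fit model assigns low probabilities to the true tokens, which inflates the pseudo-perplexity and signals that pre-training did not capture the data distribution well.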

Cross-user Collaborative and Sequential Modeling for Recommendation
Qiao-Ying He | Yi-En Chen | Kuan-Yu Chen
Proceedings of the 37th Conference on Computational Linguistics and Speech Processing (ROCLING 2025)

Multi-behavior recommendation leverages auxiliary behaviors to effectively alleviate the sparsity of target behaviors. Existing approaches can be broadly categorized into two paradigms: sequential models that capture individual temporal dynamics but often omit cross-user information, and graph-based models that mine collaborative patterns yet lack temporal dependency modeling. To address these limitations, this paper proposes an integrated approach that combines sequential and graph modeling: the former focuses on learning temporal dependencies within user behavior sequences, while the latter captures cross-user behavior paths. By fusing the predictions from both components, the method achieves more accurate recommendations. Experiments on two e-commerce datasets, Taobao and RetailRocket, show that the integrated model outperforms the strong baseline MB-STR by about 1% in both HR@10 and NDCG@10. These results indicate that incorporating cross-user collaborative information consistently improves performance, even on top of strong sequential models.
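A minimal sketch of the fusion and evaluation described above. The score-fusion rule (a weighted sum) and the single-relevant-item forms of HR@k and NDCG@k are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def hr_at_k(rank, k=10):
    # Hit ratio: 1 if the target item's rank (0-based) falls in the top-k.
    return 1.0 if rank < k else 0.0

def ndcg_at_k(rank, k=10):
    # With a single relevant item, NDCG@k reduces to 1/log2(rank + 2).
    return 1.0 / np.log2(rank + 2) if rank < k else 0.0

def fuse(seq_scores, graph_scores, alpha=0.5):
    # Late fusion: weighted sum of the sequential and graph components'
    # per-item scores; the fused scores determine the final ranking.
    return alpha * seq_scores + (1.0 - alpha) * graph_scores
```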

Hakka Speech Recognition with Whisper and Pinyin Post-processing for FSR-2025
Chia-Hsin Lee | Yung-Jun Chang | Jin-Yan Wu | Kuan-Yu Chen
Proceedings of the 37th Conference on Computational Linguistics and Speech Processing (ROCLING 2025)

This technical report describes our participation in the FSR-2025 Hakka speech recognition challenge (Hakka ASR II), which aims to advance automatic speech recognition for Hakka. As a low-resource language with multiple accents, Hakka poses considerable challenges for ASR. Using Whisper large-v2 as the backbone model, we design a two-stage training pipeline: we first adapt the model on the Hakka Across Taiwan (HAT) corpus to capture the general acoustic characteristics of Hakka, and then fine-tune it on the 60 hours of accent-specific data provided by the organizers to improve its fit to the target data. Experiments show that directly outputting Hakka characters achieves a good character error rate (CER), but performance on the Pinyin task drops markedly owing to accent differences and variable romanization rules. To address this, we initialize the Pinyin model with the encoder of the character model and propose a post-processing module that combines RoBERTa-based character-to-Pinyin conversion, accent identification, and dictionary-based correction, in the hope of improving recognition performance in the challenge.

2023

WordRank: A Word Ranking based Training Strategy for Abstractive Document Summarization
Hsiao-Wei Chou | Ping-Yen Wu | Jia-Jang Tu | Kuan-yu Chen
Proceedings of the 35th Conference on Computational Linguistics and Speech Processing (ROCLING 2023)

2022

Chinese Movie Dialogue Question Answering Dataset
Shang-Bao Luo | Cheng-Chung Fan | Kuan-Yu Chen | Yu Tsao | Hsin-Min Wang | Keh-Yih Su
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)

This paper constructs CMDQA, a Chinese dialogue-based information-seeking question answering dataset, mainly targeting the scenario of obtaining information about Chinese movies. It contains 10K QA dialogues (40K turns in total). All questions and background documents are compiled from Wikipedia via a web crawler, and the answers are obtained by extracting the corresponding answer spans from the related text passages. In CMDQA, in addition to requiring the retrieval of related documents, pronouns are added to the questions to better mimic real dialogue scenarios. The dataset can test the individual performance of the information retrieval, question answering, and question rewriting modules. This paper also provides a baseline system and reports its performance on the dataset. The experiments show that a substantial gap to human performance remains, so the dataset offers ample challenge for further research.

2021

2020福爾摩沙臺語語音辨識比賽之初步實驗 (A Preliminary Study of Formosa Speech Recognition Challenge 2020 – Taiwanese ASR)
Fu-Hao Yu | Ke-Han Lu | Yi-Wei Wang | Wei-Zhe Chang | Wei-Kai Huang | Kuan-Yu Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 26, Number 1, June 2021

Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)
Lung-Hao Lee | Chia-Hui Chang | Kuan-Yu Chen
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

A Flexible and Extensible Framework for Multiple Answer Modes Question Answering
Cheng-Chung Fan | Chia-Chih Kuo | Shang-Bao Luo | Pei-Jun Liao | Kuang-Yu Chang | Chiao-Wei Hsu | Meng-Tse Wu | Shih-Hong Tsai | Tzu-Man Wu | Aleksandra Smolka | Chao-Chun Liang | Hsin-Min Wang | Kuan-Yu Chen | Yu Tsao | Keh-Yih Su
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

This paper presents a framework to answer questions that require various kinds of inference mechanisms (such as Extraction, Entailment-Judgement, and Summarization). Most previous approaches adopt a rigid framework that handles only one inference mechanism. Only a few adopt several answer generation modules to provide different mechanisms; however, they either lack an aggregation mechanism to merge the answers from the various modules, or are too complicated to be implemented with neural networks. To alleviate these problems, we propose a divide-and-conquer framework, which consists of a set of answer generation modules, a dispatch module, and an aggregation module. The answer generation modules provide different inference mechanisms, the dispatch module selects a few appropriate answer generation modules to generate answer candidates, and the aggregation module selects the final answer. We test our framework on the 2020 Formosa Grand Challenge Contest dataset. Experiments show that the proposed framework outperforms the state-of-the-art RoBERTa-large model by about 11.4%.
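The dispatch-and-aggregation idea can be sketched as follows. The module names, dispatch rule, and scores are all hypothetical placeholders; the paper's modules are neural models, not string heuristics.

```python
# Hypothetical skeleton of the divide-and-conquer framework: answer
# generation modules, a dispatch step that picks which modules to run,
# and an aggregation step that picks the final answer.
def extraction_module(question, passage):
    # Placeholder span extractor: returns the first sentence with a toy score.
    return {"answer": passage.split(".")[0], "score": 0.6}

def entailment_module(question, passage):
    # Placeholder entailment judge: returns a yes/no answer with a toy score.
    return {"answer": "yes", "score": 0.4}

MODULES = {"extraction": extraction_module, "entailment": entailment_module}

def dispatch(question):
    # Toy dispatch rule: yes/no questions go only to the entailment module.
    if question.rstrip("?").lower().startswith(("is", "does", "can")):
        return ["entailment"]
    return ["extraction", "entailment"]

def answer(question, passage):
    candidates = [MODULES[m](question, passage) for m in dispatch(question)]
    # Aggregation: keep the highest-scoring candidate answer.
    return max(candidates, key=lambda c: c["score"])["answer"]
```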

A BERT-based Siamese-structured Retrieval Model
Hung-Yun Chiang | Kuan-Yu Chen
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

Owing to the development of deep learning, natural language processing tasks have made great progress by leveraging bidirectional encoder representations from Transformers (BERT). The goal of information retrieval is to search a large set of documents for the results most relevant to a user’s query. Although BERT-based retrieval models have shown excellent results in many studies, they usually require large amounts of computation and/or additional storage space. In view of these flaws, a BERT-based Siamese-structured retrieval model (BESS) is proposed in this paper. BESS not only inherits the merits of pre-trained language models but can also automatically generate extra information to complement the original query. In addition, a reinforcement learning strategy is introduced to make the model more robust. Accordingly, we evaluate BESS on three publicly available corpora, and the experimental results demonstrate the efficiency of the proposed retrieval model.
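The Siamese (dual-encoder) retrieval step at the heart of such models can be sketched as a vector-similarity ranking. This is a generic illustration assuming query and document embeddings are already computed; it is not the BESS model itself.

```python
import numpy as np

def rank_documents(query_vec, doc_vecs):
    # Dual-encoder retrieval: query and documents are embedded into the
    # same space, so relevance reduces to cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores), scores  # best-first document order
```

Because document vectors can be pre-computed offline, only the query is encoded at search time, which is the usual efficiency argument for Siamese retrieval.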

ntust-nlp-1 at ROCLING-2021 Shared Task: Educational Texts Dimensional Sentiment Analysis using Pretrained Language Models
Yi-Wei Wang | Wei-Zhe Chang | Bo-Han Fang | Yi-Chia Chen | Wei-Kai Huang | Kuan-Yu Chen
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

This technical report addresses the ROCLING 2021 Shared Task: Dimensional Sentiment Analysis for Educational Texts. To predict the affective states of Chinese educational texts, we present a practical framework employing pre-trained language models such as BERT and MacBERT. Several valuable observations and analyses can be drawn from a series of experiments. From the results, we find that MacBERT-based methods deliver better results than BERT-based methods on the validation set. Therefore, we average the prediction results of several models obtained under different settings as the final output.
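The final ensemble step, averaging the predictions of several models, can be sketched directly. The toy arrays below are hypothetical valence/arousal predictions, not values from the shared task.

```python
import numpy as np

def average_predictions(model_outputs):
    # model_outputs: list of arrays of shape (n_texts, 2), one array per
    # model, with columns (valence, arousal); the ensemble output is the
    # element-wise mean across models.
    return np.mean(np.stack(model_outputs), axis=0)

m1 = np.array([[6.0, 4.0], [3.0, 5.0]])  # hypothetical model A predictions
m2 = np.array([[7.0, 5.0], [4.0, 6.0]])  # hypothetical model B predictions
final = average_predictions([m1, m2])
```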

ntust-nlp-2 at ROCLING-2021 Shared Task: BERT-based semantic analyzer with word-level information
Ke-Han Lu | Kuan-Yu Chen
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

In this paper, we propose a BERT-based dimensional semantic analyzer designed to incorporate word-level information. Our model achieved three of the best results across four metrics in the “ROCLING 2021 Shared Task: Dimensional Sentiment Analysis for Educational Texts”. We conducted a series of experiments to compare the effectiveness of different pre-trained models, and the results also show that our method significantly outperforms classic methods. Based on the experiments, we further discuss the impact of model architectures and datasets.

2020

International Journal of Computational Linguistics & Chinese Language Processing, Volume 25, Number 2, December 2020
Lung-Hao Lee | Kuan-Yu Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 25, Number 2, December 2020

A Preliminary Study on Using Meta-learning Technique for Information Retrieval
Chong-En Lin | Kuan-Yu Chen
Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)

A Preliminary Study on Leveraging Meta Learning Technique for Code-switching Speech Recognition
Fu-Hao Yu | Kuan-Yu Chen
Proceedings of the 32nd Conference on Computational Linguistics and Speech Processing (ROCLING 2020)

2019

基於特徵粒度之訓練策略於中文口語問答系統之應用 (A Feature-granularity Training Strategy for Chinese Spoken Question Answering)
Shang-Bao Luo | Kuan-Yu Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 24, Number 2, December 2019

EBSUM: 基於 BERT 的強健性抽取式摘要法 (EBSUM: An Enhanced BERT-based Extractive Summarization Framework)
Zheng-Yu Wu | Kuan-Yu Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 24, Number 2, December 2019

基於特徵粒度之訓練策略於中文口語問答系統之應用(A Feature-granularity Training Strategy for Chinese Spoken Question Answering)
Shang-Bao Luo | Kuan-Yu Chen
Proceedings of the 31st Conference on Computational Linguistics and Speech Processing (ROCLING 2019)

新穎的序列生成架構於中文重寫式摘要之研究(Novel Sequence Generation Framework for Chinese Abstractive Summarization)
Chin-Yueh Chien | Kuan-Yu Chen
Proceedings of the 31st Conference on Computational Linguistics and Speech Processing (ROCLING 2019)

EBSUM: 基於BERT 的強健性抽取式摘要法(EBSUM: An Enhanced BERT-based Extractive Summarization Framework)
Zheng-Yu Wu | Kuan-Yu Chen
Proceedings of the 31st Conference on Computational Linguistics and Speech Processing (ROCLING 2019)

GALs: 基於對抗式學習之整列式摘要法 (GALs: A GAN-based Listwise Summarizer)
Chia-Chih Kuo | Kuan-Yu Chen
Proceedings of the 31st Conference on Computational Linguistics and Speech Processing (ROCLING 2019)

2018

未登錄詞之向量表示法模型於中文機器閱讀理解之應用 (An OOV Word Embedding Framework for Chinese Machine Reading Comprehension)
Shang-Bao Luo | Ching-Hsien Lee | Jia-Jang Tu | Kuan-Yu Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 23, Number 2, December 2018

未登錄詞之向量表示法模型於中文機器閱讀理解之應用 (An OOV Word Embedding Framework for Chinese Machine Reading Comprehension) [In Chinese]
Shang-Bao Luo | Ching-Hsien Lee | Kuan-Yu Chen
Proceedings of the 30th Conference on Computational Linguistics and Speech Processing (ROCLING 2018)

2017

使用查詢意向探索與類神經網路於語音文件檢索之研究 (Exploring Query Intent and Neural Network modeling Techniques for Spoken Document Retrieval) [In Chinese]
Tien-Hong Lo | Ying-Wen Chen | Berlin Chen | Kuan-Yu Chen | Hsin-Min Wang
Proceedings of the 29th Conference on Computational Linguistics and Speech Processing (ROCLING 2017)

當代非監督式方法之比較於節錄式語音摘要 (An Empirical Comparison of Contemporary Unsupervised Approaches for Extractive Speech Summarization) [In Chinese]
Shih-Hung Liu | Kuan-Yu Chen | Kai-Wun Shih | Berlin Chen | Hsin-Min Wang | Wen-Lian Hsu
International Journal of Computational Linguistics & Chinese Language Processing, Volume 22, Number 1, June 2017

語音文件檢索使用類神經網路技術 (On the Use of Neural Network Modeling Techniques for Spoken Document Retrieval) [In Chinese]
Tien-Hong Lo | Ying-Wen Chen | Kuan-Yu Chen | Hsin-Min Wang | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 22, Number 2, December 2017 - Special Issue on Selected Papers from ROCLING XXIX

2016

Learning to Distill: The Essence Vector Modeling Framework
Kuan-Yu Chen | Shih-Hung Liu | Berlin Chen | Hsin-Min Wang
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

In the context of natural language processing, representation learning has emerged as an active research subject because of its excellent performance in many applications. Learning representations of words is a pioneering study in this school of research. However, paragraph (or sentence and document) embedding learning is more suitable for some tasks, such as sentiment classification and document summarization. Nevertheless, as far as we are aware, there is a dearth of research on unsupervised paragraph embedding methods. Classic paragraph embedding methods infer the representation of a given paragraph by considering all of the words occurring in it. Consequently, stop or function words that occur frequently may mislead the embedding learning process and produce a blurred paragraph representation. Motivated by these observations, our major contributions are twofold. First, we propose a novel unsupervised paragraph embedding method, named the essence vector (EV) model, which aims not only to distill the most representative information from a paragraph but also to exclude the general background information, producing a more informative low-dimensional vector representation. We evaluate the proposed EV model on benchmark sentiment classification and multi-document summarization tasks. The experimental results demonstrate the effectiveness and applicability of the proposed embedding method. Second, in view of the increasing importance of spoken content processing, an extension of the EV model, named the denoising essence vector (D-EV) model, is proposed. The D-EV model not only inherits the advantages of the EV model but also infers a representation for a given spoken paragraph that is more robust to imperfect speech recognition. The utility of the D-EV model is evaluated on a spoken document summarization task, confirming the effectiveness of the proposed embedding method in relation to several well-practiced and state-of-the-art summarization methods.

融合多任務學習類神經網路聲學模型訓練於會議語音辨識之研究(Leveraging Multi-task Learning with Neural Network Based Acoustic Modeling for Improved Meeting Speech Recognition) [In Chinese]
Ming-Han Yang | Yao-Chi Hsu | Hsiao-Tsung Hung | Ying-Wen Chen | Berlin Chen | Kuan-Yu Chen
Proceedings of the 28th Conference on Computational Linguistics and Speech Processing (ROCLING 2016)

運用序列到序列生成架構於重寫式自動摘要(Exploiting Sequence-to-Sequence Generation Framework for Automatic Abstractive Summarization)[In Chinese]
Yu-Lun Hsieh | Shih-Hung Liu | Kuan-Yu Chen | Hsin-Min Wang | Wen-Lian Hsu | Berlin Chen
Proceedings of the 28th Conference on Computational Linguistics and Speech Processing (ROCLING 2016)

評估尺度相關最佳化方法於華語錯誤發音檢測之研究 (Evaluation Metric-related Optimization Methods for Mandarin Mispronunciation Detection) [In Chinese]
Yao-Chi Hsu | Ming-Han Yang | Hsiao-Tsung Hung | Yi-Ju Lin | Kuan-Yu Chen | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 21, Number 2, December 2016

融合多任務學習類神經網路聲學模型訓練於會議語音辨識之研究 (Leveraging Multi-Task Learning with Neural Network Based Acoustic Modeling for Improved Meeting Speech Recognition) [In Chinese]
Ming-Han Yang | Yao-Chi Hsu | Hsiao-Tsung Hung | Ying-Wen Chen | Kuan-Yu Chen | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 21, Number 2, December 2016

2015

表示法學習技術於節錄式語音文件摘要之研究(A Study on Representation Learning Techniques for Extractive Spoken Document Summarization) [In Chinese]
Kai-Wun Shih | Berlin Chen | Kuan-Yu Chen | Shih-Hung Liu | Hsin-Min Wang
Proceedings of the 27th Conference on Computational Linguistics and Speech Processing (ROCLING 2015)

使用詞向量表示與概念資訊於中文大詞彙連續語音辨識之語言模型調適(Exploring Word Embedding and Concept Information for Language Model Adaptation in Mandarin Large Vocabulary Continuous Speech Recognition) [In Chinese]
Ssu-Cheng Chen | Kuan-Yu Chen | Hsiao-Tsung Hung | Berlin Chen
Proceedings of the 27th Conference on Computational Linguistics and Speech Processing (ROCLING 2015)

可讀性預測於中小學國語文教科書及優良課外讀物之研究(A Study of Readability Prediction on Elementary and Secondary Chinese Textbooks and Excellent Extracurricular Reading Materials) [In Chinese]
Yi-Nian Liu | Kuan-Yu Chen | Hou-Chiang Tseng | Berlin Chen
Proceedings of the 27th Conference on Computational Linguistics and Speech Processing (ROCLING 2015)

調變頻譜分解之改良於強健性語音辨識(Several Refinements of Modulation Spectrum Factorization for Robust Speech Recognition) [In Chinese]
Ting-Hao Chang | Hsiao-Tsung Hung | Kuan-Yu Chen | Hsin-Min Wang | Berlin Chen
Proceedings of the 27th Conference on Computational Linguistics and Speech Processing (ROCLING 2015)

節錄式語音文件摘要使用表示法學習技術 (Extractive Spoken Document Summarization with Representation Learning Techniques) [In Chinese]
Kai-Wun Shih | Kuan-Yu Chen | Shih-Hung Liu | Hsin-Min Wang | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 20, Number 2, December 2015 - Special Issue on Selected Papers from ROCLING XXVII

調變頻譜分解技術於強健語音辨識之研究 (Investigating Modulation Spectrum Factorization Techniques for Robust Speech Recognition) [In Chinese]
Ting-Hao Chang | Hsiao-Tsung Hung | Kuan-Yu Chen | Hsin-Min Wang | Berlin Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 20, Number 2, December 2015 - Special Issue on Selected Papers from ROCLING XXVII

Proceedings of the ACL-IJCNLP 2015 Student Research Workshop
Kuan-Yu Chen | Angelina Ivanova | Ellie Pavlick | Emily Bender | Chin-Yew Lin | Stephan Oepen
Proceedings of the ACL-IJCNLP 2015 Student Research Workshop

2014

Leveraging Effective Query Modeling Techniques for Speech Recognition and Summarization
Kuan-Yu Chen | Shih-Hung Liu | Berlin Chen | Ea-Ee Jan | Hsin-Min Wang | Wen-Lian Hsu | Hsin-Hsi Chen
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

探究新穎語句模型化技術於節錄式語音摘要 (Investigating Novel Sentence Modeling Techniques for Extractive Speech Summarization) [In Chinese]
Shih-Hung Liu | Kuan-Yu Chen | Yu-Lun Hsieh | Berlin Chen | Hsin-Min Wang | Wen-Lian Hsu
Proceedings of the 26th Conference on Computational Linguistics and Speech Processing (ROCLING 2014)

2013

Semantic Naïve Bayes Classifier for Document Classification
How Jing | Yu Tsao | Kuan-Yu Chen | Hsin-Min Wang
Proceedings of the Sixth International Joint Conference on Natural Language Processing

改良語句模型技術於節錄式語音摘要之研究 (Improved Sentence Modeling Techniques for Extractive Speech Summarization) [In Chinese]
Shih-Hung Liu | Kuan-Yu Chen | Hsin-Min Wang | Wen-Lian Hsu | Berlin Chen
Proceedings of the 25th Conference on Computational Linguistics and Speech Processing (ROCLING 2013)

A Study of Language Modeling for Chinese Spelling Check
Kuan-Yu Chen | Hung-Shin Lee | Chung-Han Lee | Hsin-Min Wang | Hsin-Hsi Chen
Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing

2011

實證探究多種鑑別式語言模型於語音辨識之研究 (Empirical Comparisons of Various Discriminative Language Models for Speech Recognition) [In Chinese]
Min-Hsuan Lai | Bang-Xuan Huang | Kuan-Yu Chen | Berlin Chen
Proceedings of the 23rd Conference on Computational Linguistics and Speech Processing (ROCLING 2011)

2009

主題語言模型於大詞彙連續語音辨識之研究 (On the Use of Topic Models for Large-Vocabulary Continuous Speech Recognition) [In Chinese]
Kuan-Yu Chen | Berlin Chen
Proceedings of the 21st Conference on Computational Linguistics and Speech Processing