Yuchang Cheng


2025

SubRegWeigh: Effective and Efficient Annotation Weighing with Subword Regularization
Kohei Tsuji | Tatsuya Hiraoka | Yuchang Cheng | Tomoya Iwakura
Proceedings of the 31st International Conference on Computational Linguistics

NLP datasets may still contain annotation errors, even when they are manually annotated. Researchers have developed methods to automatically reduce the adverse effect of such errors, but existing methods are time-consuming because they require training many models to detect errors. This paper proposes a time-saving method that uses a tokenization technique called subword regularization to simulate multiple error-detection models. Our proposed method, SubRegWeigh, performs annotation weighting four to five times faster than the existing method. Additionally, SubRegWeigh improved performance in document classification and named entity recognition tasks. In experiments with pseudo-incorrect labels, SubRegWeigh clearly identified the pseudo-incorrect labels as annotation errors. Our code is available at https://github.com/4ldk/SubRegWeigh.
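As a rough illustration of the idea in the abstract, the sketch below samples several random tokenizations of a sentence (a toy stand-in for subword regularization such as unigram sampling or BPE-dropout) and weights an example by how often a fixed labeler agrees with its prediction on the canonical tokenization. The names `sample_tokenization` and `agreement_weight` are hypothetical, and this is not the paper's actual implementation:

```python
import random

def sample_tokenization(word, rng):
    # Toy stand-in for subword regularization: randomly segment a word
    # into contiguous pieces (real systems sample from a subword model).
    pieces, start = [], 0
    while start < len(word):
        end = rng.randint(start + 1, len(word))  # inclusive upper bound
        pieces.append(word[start:end])
        start = end
    return pieces

def agreement_weight(sentence, label_fn, k=5, seed=0):
    # Run one labeler on k sampled tokenizations, which simulates an
    # ensemble of models, and weight the example by how often the
    # prediction agrees with the canonical (whitespace) tokenization.
    rng = random.Random(seed)
    canonical = label_fn(sentence.split())
    agree = sum(
        label_fn([p for w in sentence.split() for p in sample_tokenization(w, rng)])
        == canonical
        for _ in range(k)
    )
    return agree / k
```

Examples whose labels flip under resampled tokenizations receive low weight, which is the signal the abstract uses to flag likely annotation errors.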

2022

Sharing Parameter by Conjugation for Knowledge Graph Embeddings in Complex Space
Xincan Feng | Zhi Qu | Yuchang Cheng | Taro Watanabe | Nobuhiro Yugami
Proceedings of TextGraphs-16: Graph-based Methods for Natural Language Processing

A Knowledge Graph (KG) is a directed graphical representation of entities and relations in the real world. KGs can be applied in diverse Natural Language Processing (NLP) tasks where knowledge is required. The need to scale up and complete KGs automatically has led to Knowledge Graph Embedding (KGE), a family of shallow machine learning models that suffer from high memory and training-time costs. To mitigate the computational load, we propose a parameter-sharing method: using conjugate parameters for the complex numbers employed in KGE models. Our method improves memory efficiency by 2x for relation embeddings while achieving performance comparable to the state-of-the-art non-conjugate models, with faster, or at least comparable, training time. We demonstrate the generalizability of our method on two of the best-performing KGE models, 5E (CITATION) and ComplEx (CITATION), on five benchmark datasets.
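The conjugate-sharing idea can be sketched with plain Python complex numbers: store only half of a relation embedding and reconstruct the other half as its complex conjugate, so relation-embedding memory is halved while a ComplEx-style score remains computable. The function names below are illustrative assumptions, not the paper's code, and the actual sharing scheme differs in its details:

```python
def complex_score(head, rel, tail):
    # ComplEx-style triple score: Re(sum_i h_i * r_i * conj(t_i)).
    return sum((h * r * t.conjugate()).real for h, r, t in zip(head, rel, tail))

def expand_conjugate(rel_half):
    # Parameter sharing: only the first half of the relation embedding is
    # stored; the second half is its complex conjugate. A relation of
    # dimension 2n therefore needs only n stored complex parameters.
    return rel_half + [r.conjugate() for r in rel_half]

# Two stored parameters expand to a dimension-4 relation embedding.
rel_half = [1 + 2j, 3 - 1j]
rel_full = expand_conjugate(rel_half)
```

Only `rel_half` would live in the model's parameter table; `rel_full` is materialized on the fly at scoring time, which is where the 2x memory saving for relation embeddings comes from.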

2014

Detecting the Untranslatable Colloquial Expressions of Japanese Verbs in Cross-Language Instant Messaging
Yuchang Cheng | Masaru Fuji | Tomoki Nagase | Minoru Uegaki | Isaac Okada
Proceedings of the 28th Pacific Asia Conference on Language, Information and Computing

2012

An Example-Based Japanese Proofreading System for Offshore Development
Yuchang Cheng | Tomoki Nagase
Proceedings of COLING 2012: Demonstration Papers

2008

Use of Event Types for Temporal Relation Identification in Chinese Text
Yuchang Cheng | Masayuki Asahara | Yuji Matsumoto
Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing

Constructing a Temporal Relation Tagged Corpus of Chinese Based on Dependency Structure Analysis
Yuchang Cheng | Masayuki Asahara | Yuji Matsumoto
International Journal of Computational Linguistics & Chinese Language Processing, Volume 13, Number 2, June 2008

2007

NAIST.Japan: Temporal Relation Identification Using Dependency Parsed Tree
Yuchang Cheng | Masayuki Asahara | Yuji Matsumoto
Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)

2006

Multi-lingual Dependency Parsing at NAIST
Yuchang Cheng | Masayuki Asahara | Yuji Matsumoto
Proceedings of the Tenth Conference on Computational Natural Language Learning (CoNLL-X)

The Construction of a Dictionary for a Two-layer Chinese Morphological Analyzer
Chooi-Ling Goh | Jia Lü | Yuchang Cheng | Masayuki Asahara | Yuji Matsumoto
Proceedings of the 20th Pacific Asia Conference on Language, Information and Computation

2005

Chinese Deterministic Dependency Analyzer: Examining Effects of Global Features and Root Node Finder
Yuchang Cheng | Masayuki Asahara | Yuji Matsumoto
Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing