Yu-Hsiang Tseng


2024

The Semantic Relations in LLMs: An Information-theoretic Compression Approach
Yu-Hsiang Tseng | Pin-Er Chen | Da-Chen Lian | Shu-Kai Hsieh
Proceedings of the Workshop: Bridging Neurons and Symbols for Natural Language Processing and Knowledge Graphs Reasoning (NeusymBridge) @ LREC-COLING-2024

From an information-theoretic viewpoint, compressibility is closely related to the predictability of a text. As large language models (LLMs) are trained to maximize the conditional probabilities of upcoming words, they may capture the subtleties and nuances of the semantic constraints underlying texts, and texts that align with the encoded semantic constraints should be more compressible than those that do not. This paper systematically tests whether and how LLMs can act as compressors of semantic pairs. Using semantic relations from the English and Chinese Wordnets, we empirically demonstrate that texts with correct semantic pairings are more compressible than incorrect ones, as measured by the proposed compression advantages index. We also show, with the Pythia model suite and a model fine-tuned on Chinese Wordnet, that compression capacities are modulated by the data the model has seen. These findings are consistent with the view that LLMs encode semantic knowledge as underlying constraints learned from texts and can act as compressors of semantic information or potentially other structured knowledge.
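The core measurement lends itself to a short sketch. Below is a minimal, hypothetical illustration of the idea, assuming a Hugging Face causal LM from the Pythia suite the paper uses; the function names, the bit-length scoring, and the example pair are ours, not the paper's implementation of the compression advantages index.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small Pythia checkpoint, chosen here only for speed.
tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m")
model.eval()

def code_length_bits(text: str) -> float:
    """Ideal code length of `text` under the model: its negative
    log-likelihood in bits (Shannon's source-coding view of compression)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL per predicted token, in nats
    n_predicted = ids.size(1) - 1           # the first token is never predicted
    return loss.item() * n_predicted / math.log(2)

def compression_advantage(correct: str, incorrect: str) -> float:
    """Positive when the correct semantic pairing compresses better
    (needs fewer bits) than the mismatched control."""
    return code_length_bits(incorrect) - code_length_bits(correct)

# A hypernymy pair versus a mismatched control (hypothetical example):
print(compression_advantage("A sparrow is a kind of bird.",
                            "A sparrow is a kind of furniture."))
```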

2023

Exploring Affordance and Situated Meaning in Image Captions: A Multimodal Analysis
Pin-Er Chen | Po-Ya Angela Wang | Hsin-Yu Chou | Yu-Hsiang Tseng | Shu-Kai Hsieh
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

Vec2Gloss: definition modeling leveraging contextualized vectors with Wordnet gloss
Yu-Hsiang Tseng | Mao-Chang Ku | Wei-Ling Chen | Yu-Lin Chang | Shu-Kai Hsieh
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

Lexical Retrieval Hypothesis in Multimodal Context
Po-Ya Angela Wang | Pin-Er Chen | Hsin-Yu Chou | Yu-Hsiang Tseng | Shu-Kai Hsieh
Proceedings of the 4th Conference on Language, Data and Knowledge

2022

Analyzing Discourse Functions with Acoustic Features and Phone Embeddings: Non-lexical Items in Taiwan Mandarin
Pin-Er Chen | Yu-Hsiang Tseng | Chi-Wei Wang | Fang-Chi Yeh | Shu-Kai Hsieh
International Journal of Computational Linguistics & Chinese Language Processing, Volume 27, Number 2, December 2022

Analyzing discourse functions with acoustic features and phone embeddings: non-lexical items in Taiwan Mandarin
Pin-Er Chen | Yu-Hsiang Tseng | Chi-Wei Wang | Fang-Chi Yeh | Shu-Kai Hsieh
Proceedings of the 34th Conference on Computational Linguistics and Speech Processing (ROCLING 2022)

Non-lexical items are expressive devices used in conversations that are not words but are nevertheless meaningful. These items play crucial roles, such as signaling turn-taking or marking stances in interactions. However, as non-lexical items do not correspond stably to written or phonological forms, past studies have tended to focus on their acoustic properties, such as pitch and duration. In this paper, we investigate the discourse functions of non-lexical items through their acoustic properties and phone embeddings extracted from a deep learning model. First, we create a non-lexical item dataset based on interpellation video clips from Taiwan’s Legislative Yuan. Then, we manually identify the non-lexical items and their discourse functions in the videos. Next, we analyze the acoustic properties of those items through statistical modeling and build classifiers on phone embeddings extracted from a phone recognition model. We show that (1) the discourse functions have significant effects on the acoustic features, and (2) the classifiers built on phone embeddings perform better than those built on conventional acoustic properties. These results suggest that phone embeddings may reflect the phonetic variations crucial in differentiating the discourse functions of non-lexical items.
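The classifier comparison can be sketched with scikit-learn on stand-in data; the feature dimensions, labels, and random arrays below are placeholders, not the paper's dataset or exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per non-lexical item token.
rng = np.random.default_rng(0)
n = 200
X_acoustic = rng.normal(size=(n, 4))     # e.g. mean pitch, pitch range, duration, intensity
X_phone_emb = rng.normal(size=(n, 128))  # pooled phone embeddings from a recognizer
y = rng.integers(0, 3, size=n)           # discourse-function labels

for name, X in [("acoustic", X_acoustic), ("phone embedding", X_phone_emb)]:
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name:>16}: mean CV accuracy = {acc:.3f}")
```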

CxLM: A Construction and Context-aware Language Model
Yu-Hsiang Tseng | Cing-Fang Shih | Pin-Er Chen | Hsin-Yu Chou | Mao-Chang Ku | Shu-Kai Hsieh
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Constructions are direct form-meaning pairs with possible schematic slots. These slots are simultaneously constrained by the embedded construction itself and the sentential context. We propose that this constraint can be described by a conditional probability distribution; however, as the distribution is inevitably complex, we utilize language models to capture it. We therefore build CxLM, a deep learning-based masked language model explicitly tuned to constructions’ schematic slots. We first compile a construction dataset consisting of over ten thousand constructions in Taiwan Mandarin. Next, an experiment is conducted on the dataset to examine to what extent a pretrained masked language model is aware of the constructions. We then fine-tune the model specifically to perform a cloze task on the open slots. We find that the fine-tuned model predicts masked slots more accurately than baselines and generates both structurally and semantically plausible word samples. Finally, we release CxLM and its dataset as publicly available resources, which we hope will serve as new quantitative tools for studying construction grammar.
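The cloze task at the heart of CxLM can be sketched with an off-the-shelf Chinese masked LM; here `bert-base-chinese` and the example construction 連…都 stand in for the paper's fine-tuned model and construction dataset.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-chinese"  # stand-in; CxLM is fine-tuned from a Chinese masked LM
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name)
mlm.eval()

# The schematic slot of the construction 連 X 都… is masked out:
sent = f"他連{tok.mask_token}都不吃。"  # "He won't even eat [MASK]."
ids = tok(sent, return_tensors="pt")
with torch.no_grad():
    logits = mlm(**ids).logits

# Rank the vocabulary at the masked slot position.
pos = (ids.input_ids == tok.mask_token_id).nonzero(as_tuple=True)[1]
top = logits[0, pos].topk(5, dim=-1).indices[0]
print(tok.convert_ids_to_tokens(top.tolist()))  # plausible slot fillers
```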

Character Jacobian: Modeling Chinese Character Meanings with Deep Learning Model
Yu-Hsiang Tseng | Shu-Kai Hsieh
Proceedings of the 29th International Conference on Computational Linguistics

Compounding, a prevalent word-formation process, presents an interesting challenge for computational models. Indeed, the relations between compounds and their constituents are often complicated. It is particularly so in Chinese morphology, where each character is almost simultaneously bound and free when treated as a morpheme. To model this word-formation process, we propose the Notch (NOnlinear Transformation of CHaracter embeddings) model and character Jacobians. The Notch model first learns the non-linear relations between the constituents and words, and the character Jacobians further describe each character’s role in a word. In a series of experiments, we show that the Notch model not only predicts the embeddings of real words from their constituents but also helps account for behavioral data on pseudowords. Moreover, we demonstrate that character Jacobians reflect the characters’ meanings. Taken together, the Notch model and character Jacobians may provide a new perspective on studying the word-formation process and morphology with modern deep learning.
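To make the two ingredients concrete, here is a minimal PyTorch stand-in: a small nonlinear map in the spirit of the Notch model, and a character Jacobian computed with autograd. The architecture and dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

EMB = 64  # embedding dimensionality (illustrative)

# A toy Notch-style model: a nonlinear map from the concatenated
# constituent-character embeddings to the compound-word embedding.
notch = nn.Sequential(nn.Linear(2 * EMB, 128), nn.Tanh(), nn.Linear(128, EMB))

def character_jacobian(char1: torch.Tensor, char2: torch.Tensor) -> torch.Tensor:
    """Jacobian of the predicted word embedding with respect to the first
    character's embedding: how that character shapes the word's meaning."""
    def f(c1: torch.Tensor) -> torch.Tensor:
        return notch(torch.cat([c1, char2]))
    return torch.autograd.functional.jacobian(f, char1)

c1, c2 = torch.randn(EMB), torch.randn(EMB)
J = character_jacobian(c1, c2)
print(J.shape)  # torch.Size([64, 64])
```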

2021

What confuses BERT? Linguistic Evaluation of Sentiment Analysis on Telecom Customer Opinion
Cing-Fang Shih | Yu-Hsiang Tseng | Ching-Wen Yang | Pin-Er Chen | Hsin-Yu Chou | Lian-Hui Tan | Tzu-Ju Lin | Chun-Wei Wang | Shu-Kai Hsieh
Proceedings of the 33rd Conference on Computational Linguistics and Speech Processing (ROCLING 2021)

Ever-expanding evaluative texts on online forums have become an important source of data for sentiment analysis. This paper proposes an aspect-based annotated dataset consisting of telecom reviews on social media. We introduce a category of implicit evaluative texts, impevals for short, to investigate how deep learning models handle these implicit reviews. We first compare two models, BertSimple and BertImpvl, and find that while both are competent at classifying simple evaluative texts, they are confused by impevals. To investigate the factors underlying the correctness of the models’ predictions, we conduct a series of analyses, including qualitative error analysis and quantitative analysis of linguistic features with logistic regressions. The results show that local features that affect the overall sentential sentiment confuse the model: multiple target entities, transitional words, sarcasm, and rhetorical questions. Crucially, these linguistic features are independent of the model’s confidence as measured by the classifier’s softmax probabilities. Interestingly, sentence complexity, indicated by syntax-tree depth, is not correlated with the model’s correctness. In sum, this paper sheds light, through linguistic evaluation, on the characteristics of modern deep learning models and on when they might need more supervision.
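The quantitative analysis can be sketched as a logistic regression of prediction correctness on linguistic features; the simulated data and effect sizes below are entirely hypothetical and only illustrate the setup.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-sentence data: binary linguistic features of the kind
# analyzed in the paper, and whether the classifier got the sentence right.
rng = np.random.default_rng(1)
n = 300
X = rng.integers(0, 2, size=(n, 3))          # multiple targets, sarcasm, rhetorical question
eta = 1.0 - X @ np.array([0.8, 1.1, 0.9])    # invented effect sizes
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))  # 1 = prediction correct

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.summary())  # negative coefficients mark features that confuse the model
```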

Exploring sentiment constructions: connecting deep learning models with linguistic construction
Shu-Kai Hsieh | Yu-Hsiang Tseng
Proceedings of the 35th Pacific Asia Conference on Language, Information and Computation

2020

Computational Modeling of Affixoid Behavior in Chinese Morphology
Yu-Hsiang Tseng | Shu-Kai Hsieh | Pei-Yi Chen | Sara Court
Proceedings of the 28th International Conference on Computational Linguistics

The morphological status of affixes in Chinese has long been a matter of debate. How one might apply the conventional criteria of free/bound and content/function features to distinguish word-forming affixes from bound roots in Chinese is still far from clear. Issues involving polysemy and diachronic dynamics further blur the boundaries. In this paper, we propose three quantitative features in a computational model of affixoid behavior in Mandarin Chinese. The results show that, except in a very few cases, there are no clear criteria that can be used to identify an affix’s status in an isolating language like Chinese. A diachronic check using contextualized embeddings with the WordNet Sense Inventory also demonstrates the possible role of the polysemy of lexical roots across diachronic settings.

From Sense to Action: A Word-Action Disambiguation Task in NLP
Shu-Kai Hsieh | Yu-Hsiang Tseng | Chiung-Yu Chiang | Richard Lian | Yong-fu Liao | Mao-Chang Ku | Ching-Fang Shih
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation

2019

Augmenting Chinese WordNet semantic relations with contextualized embeddings
Yu-Hsiang Tseng | Shu-Kai Hsieh
Proceedings of the 10th Global Wordnet Conference

Constructing semantic relations in WordNet has been a labour-intensive task, especially in a dynamic and fast-changing language environment. Drawing on recent advances in contextualized embeddings, this paper proposes the concept of morphology-guided sense vectors, which can be used to semi-automatically augment semantic relations in Chinese Wordnet (CWN). This paper (1) builds sense vectors with pre-trained contextualized embedding models; (2) demonstrates that the computed sense vectors are consistent with the sense distinctions made in CWN; and (3) predicts potential semantically related sense pairs with high accuracy using the sense vector model.
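A simplified version of the sense-vector construction, assuming a generic Chinese BERT rather than the paper's morphology-guided setup; the pooling heuristic and the example sentences are illustrative.

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "bert-base-chinese"  # stand-in encoder
tok = AutoTokenizer.from_pretrained(name)
enc = AutoModel.from_pretrained(name)
enc.eval()

def sense_vector(char: str, examples: list[str]) -> torch.Tensor:
    """Average the contextualized embeddings of `char` over
    sense-annotated example sentences."""
    vecs = []
    for sent in examples:
        ids = tok(sent, return_tensors="pt")
        with torch.no_grad():
            hidden = enc(**ids).last_hidden_state[0]
        toks = tok.convert_ids_to_tokens(ids.input_ids[0])
        pos = [i for i, t in enumerate(toks) if t == char]
        vecs.append(hidden[pos].mean(0))
    return torch.stack(vecs).mean(0)

# Two senses of 打: "to make (a phone call)" vs. "to hit".
v1 = sense_vector("打", ["他打了一通電話。", "請幫我打電話給他。"])
v2 = sense_vector("打", ["他打人了。", "不要打弟弟。"])
print(torch.cosine_similarity(v1, v2, dim=0).item())
```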

Eigencharacter: An Embedding of Chinese Character Orthography
Yu-Hsiang Tseng | Shu-Kai Hsieh
Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN)

Chinese characters are unique in their logographic nature, which inherently encodes world knowledge accumulated through thousands of years of evolution. This paper proposes an embedding approach, namely the eigencharacter (EC) space, which helps NLP applications easily access the knowledge encoded in Chinese orthography. These EC representations are automatically extracted, encode both structural and radical information, and integrate easily with other computational models. We built EC representations of 5,000 Chinese characters, investigated the orthographic knowledge encoded in ECs, and demonstrated how these ECs identify visually similar characters using both structural and radical information.
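The extraction can be sketched as a truncated SVD over rendered glyph bitmaps; the rendering step (e.g. drawing each character with a CJK font) is omitted, and the random matrix below is a placeholder for real glyph images.

```python
import numpy as np

# Placeholder input: grayscale glyph bitmaps flattened to rows of a
# matrix (one row per character); real glyphs would be rendered images.
n_chars, side = 1000, 32
bitmaps = np.random.rand(n_chars, side * side)

# Eigencharacters: principal components of the centered glyph matrix.
centered = bitmaps - bitmaps.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
ecs = U[:, :100] * S[:100]  # 100-dimensional EC representation per character

def nearest(i: int, k: int = 5) -> np.ndarray:
    """Visually similar characters = nearest neighbours in EC space."""
    d = np.linalg.norm(ecs - ecs[i], axis=1)
    return np.argsort(d)[1 : k + 1]  # skip the character itself

print(nearest(0))
```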

2018

Fluid Annotation: A Granularity-aware Annotation Tool for Chinese Word Fluidity
Shu-Kai Hsieh | Yu-Hsiang Tseng | Chih-Yao Lee | Chiung-Yu Chiang
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)