Zetian Wu
2022
Inducing Generalizable and Interpretable Lexica
Yilin Geng | Zetian Wu | Roshan Santhosh | Tejas Srivastava | Lyle Ungar | João Sedoc
Findings of the Association for Computational Linguistics: EMNLP 2022
Lexica – words and associated scores – are widely used as simple, interpretable, generalizable language features to predict sentiment, emotions, mental health, and personality. They also provide insight into the psychological features behind those moods and traits. Such lexica, historically created by human experts, are valuable to linguists, psychologists, and social scientists, but they take years of refinement and have limited coverage. In this paper, we investigate how lexica that provide psycholinguistic insights can be computationally induced and how they should be assessed. We identify generalizability and interpretability as two essential properties of such lexica. We induce lexica using both context-oblivious and context-aware approaches, compare their predictive performance both within the training corpus and across various corpora, and evaluate their quality using crowd-worker assessment. We find that lexica induced from context-oblivious models are more generalizable and interpretable than those from more accurate context-aware transformer models. In addition, lexicon scores can identify explanatory words more reliably than a high-performing transformer with feature-importance measures like SHAP.
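As a rough illustration of the context-oblivious setting (not the paper's exact method), one common way to induce a lexicon is to regress outcome scores on bag-of-words counts and read the per-word coefficients off as lexicon scores. The sketch below assumes scikit-learn; the function name and toy data are hypothetical.

```python
# Minimal sketch of context-oblivious lexicon induction: fit a linear model on
# word counts and treat each word's coefficient as its lexicon score.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

def induce_lexicon(texts, scores):
    """Return {word: weight} learned from (text, score) pairs."""
    vectorizer = CountVectorizer(lowercase=True)
    X = vectorizer.fit_transform(texts)        # documents x vocabulary counts
    model = Ridge(alpha=1.0).fit(X, scores)    # linear fit keeps weights interpretable
    vocab = vectorizer.get_feature_names_out()
    return dict(zip(vocab, model.coef_))

# Hypothetical usage with toy sentiment ratings
lexicon = induce_lexicon(
    ["I love this movie", "I hate the ending", "great acting, great plot"],
    [1.0, -1.0, 0.9],
)
print(sorted(lexicon.items(), key=lambda kv: kv[1], reverse=True)[:5])
```

Because each word maps to a single score, the induced lexicon transfers directly to new corpora and can be inspected word by word, which is the generalizability/interpretability trade-off the paper examines.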
Fine-grained Multi-lingual Disentangled Autoencoder for Language-agnostic Representation Learning
Zetian Wu | Zhongkai Sun | Zhengyang Zhao | Sixing Lu | Chengyuan Ma | Chenlei Guo
Proceedings of the Massively Multilingual Natural Language Understanding Workshop (MMNLU-22)
Encoding both language-specific and language-agnostic information into a single high-dimensional space is a common practice of pre-trained Multi-lingual Language Models (pMLM). Such encoding has been shown to perform effectively on natural language tasks requiring semantics of the whole sentence (e.g., translation). However, its effectiveness appears to be limited on tasks requiring partial information of the utterance (e.g., multi-lingual entity retrieval, template retrieval, and semantic alignment). In this work, a novel Fine-grained Multilingual Disentangled Autoencoder (FMDA) is proposed to disentangle fine-grained semantic information from language-specific information in a multi-lingual setting. FMDA successfully extracts disentangled template-semantic and residual-semantic representations. Experiments conducted on the MASSIVE dataset demonstrate that the disentangled encodings boost each other during training, and that FMDA consistently outperforms the original pMLM and a strong language-disentanglement baseline on monolingual template retrieval and cross-lingual semantic retrieval tasks across multiple languages.
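To illustrate the general idea of disentangling a multilingual sentence embedding (this is a sketch of the generic technique, not the FMDA code), one can project the pMLM embedding into two sub-spaces, e.g. a semantic part and a residual language-specific part, and reconstruct the original embedding from their concatenation. All dimensions, class names, and the reconstruction-only loss below are assumptions; PyTorch is assumed.

```python
# Minimal sketch of a disentangled autoencoder over sentence embeddings.
import torch
import torch.nn as nn

class DisentangledAutoencoder(nn.Module):
    def __init__(self, emb_dim=768, sem_dim=256, res_dim=256):
        super().__init__()
        self.semantic_head = nn.Linear(emb_dim, sem_dim)   # language-agnostic part
        self.residual_head = nn.Linear(emb_dim, res_dim)   # language-specific part
        self.decoder = nn.Linear(sem_dim + res_dim, emb_dim)

    def forward(self, sentence_emb):
        sem = self.semantic_head(sentence_emb)
        res = self.residual_head(sentence_emb)
        recon = self.decoder(torch.cat([sem, res], dim=-1))
        return sem, res, recon

# Toy usage: reconstruction loss on a batch of stand-in encoder outputs
model = DisentangledAutoencoder()
emb = torch.randn(4, 768)                      # placeholder for pMLM sentence embeddings
sem, res, recon = model(emb)
loss = nn.functional.mse_loss(recon, emb)      # real systems add alignment/contrastive terms
loss.backward()
```

In this framing, cross-lingual retrieval would compare only the semantic sub-vectors, while the residual sub-vectors absorb language-specific variation.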