Kyoung-Rok Jang
2021
Ultra-High Dimensional Sparse Representations with Binarization for Efficient Text Retrieval
Kyoung-Rok Jang | Junmo Kang | Giwon Hong | Sung-Hyon Myaeng | Joohee Park | Taewon Yoon | Heecheol Seo
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
The semantic matching capabilities of neural information retrieval can ameliorate the synonymy and polysemy problems of symbolic approaches. However, neural models’ dense representations are inefficient to search, making them more suitable for re-ranking. Sparse representations, either in symbolic or latent form, are more efficient with an inverted index. Combining the merits of sparse and dense representations, we propose an ultra-high dimensional (UHD) representation scheme with directly controllable sparsity. UHD’s large capacity and minimal noise and interference among dimensions allow for binarized representations, which are highly efficient for storage and search. Also proposed is a bucketing method, where the embeddings from multiple layers of BERT are selected/merged to represent diverse linguistic aspects. We test our models on MS MARCO and TREC CAR, showing that they outperform other sparse models.
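A minimal sketch of the storage-and-search idea the abstract describes, assuming a simple top-k binarization step, a dictionary-based inverted index, and binary dot-product scoring; the dimensionality and sparsity below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def binarize_topk(vec, k):
    """Illustrative sparsification: keep the k strongest activations and
    set them to 1 (the paper controls sparsity inside the model itself)."""
    out = np.zeros(vec.shape, dtype=np.uint8)
    out[np.argsort(vec)[-k:]] = 1
    return out

def build_inverted_index(doc_vecs):
    """Map each active dimension to the ids of documents that activate it."""
    index = {}
    for doc_id, vec in enumerate(doc_vecs):
        for dim in np.flatnonzero(vec):
            index.setdefault(int(dim), []).append(doc_id)
    return index

def score(query_vec, index, num_docs):
    """Binary dot product: count active dimensions shared with each doc."""
    scores = np.zeros(num_docs, dtype=np.int32)
    for dim in np.flatnonzero(query_vec):
        for doc_id in index.get(int(dim), []):
            scores[doc_id] += 1
    return scores

# Toy usage with random stand-in "UHD" vectors (dim=10000, ~30 active).
rng = np.random.default_rng(0)
docs = [binarize_topk(rng.standard_normal(10000), 30) for _ in range(100)]
index = build_inverted_index(docs)
query = binarize_topk(rng.standard_normal(10000), 30)
best_doc = int(np.argmax(score(query, index, len(docs))))
```

Because the vectors are binary, scoring reduces to counting shared active dimensions, which is what makes an inverted index (or bitwise popcount) so cheap here compared with dense dot products.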
2018
Interpretable Word Embedding Contextualization
Kyoung-Rok Jang | Sung-Hyon Myaeng | Sang-Bum Kim
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
In this paper, we propose a method of calibrating a word embedding so that the semantics it conveys become more relevant to the context. Our method is novel in that its output clearly shows which of the senses originally present in a target word embedding become stronger or weaker. This is made possible by using sparse coding to recover the senses that comprise a word embedding.
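A rough sketch of the sparse-coding step the abstract refers to, using scikit-learn's DictionaryLearning on stand-in vectors; the dictionary size, sparsity penalty, and the re-weighting rule in contextualize are assumptions for illustration, not the authors' calibration method:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((200, 100))    # stand-in word vectors

# Learn an overcomplete dictionary whose atoms act as candidate "senses".
dl = DictionaryLearning(n_components=150, transform_algorithm="lasso_lars",
                        transform_alpha=0.1, max_iter=5, random_state=0)
codes = dl.fit_transform(embeddings)            # sparse sense activations
atoms = dl.components_                          # (150, 100) sense vectors

def contextualize(word_code, context_codes, boost=0.5):
    """Strengthen senses that are also active in the context words, damp
    the rest, then map back to embedding space (illustrative rule only)."""
    active_in_context = np.abs(context_codes).mean(axis=0) > 0
    weights = np.where(active_in_context, 1.0 + boost, 1.0 - boost)
    return (word_code * weights) @ atoms

# e.g. recalibrate word 0 against words 1-5 as its context
calibrated_vec = contextualize(codes[0], codes[1:6])
```

The interpretability claim follows from the sparse codes: comparing a word's code before and after calibration shows exactly which sense atoms gained or lost weight.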
2017
Elucidating Conceptual Properties from Word Embeddings
Kyoung-Rok Jang | Sung-Hyon Myaeng
Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications
In this paper, we introduce a method of identifying the components (i.e., dimensions) of word embeddings that strongly signify the properties of a word. By elucidating such properties hidden in word embeddings, we can make word embeddings more interpretable and perform property-based meaning comparison. With this capability, we can answer questions like “To what degree does a given word have the property cuteness?” or “In what respects are two words similar?”. We verify our method by examining how the strength of property-signifying components correlates with the degree of prototypicality of a target word.
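A small sketch of one way to locate property-signifying dimensions, correlating each dimension with human prototypicality ratings; the Spearman correlation, the stand-in data, and the scoring function are assumptions, not the paper's exact procedure:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((50, 300))  # stand-in vectors for 50 words
ratings = rng.random(50)                     # stand-in prototypicality scores

def property_dimensions(embeddings, ratings, top_k=10):
    """Rank dimensions by how strongly their values track the ratings."""
    corrs = np.array([spearmanr(embeddings[:, d], ratings)[0]
                      for d in range(embeddings.shape[1])])
    top = np.argsort(-np.abs(corrs))[:top_k]
    return top, corrs[top]

def property_strength(vec, dims, dim_signs):
    """Score how much a single word exhibits the property on those dims."""
    return float(np.sum(vec[dims] * dim_signs))

dims, corrs = property_dimensions(embeddings, ratings)
strength = property_strength(embeddings[0], dims, np.sign(corrs))
```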