Meng Tang
2022
Divide and Conquer: Text Semantic Matching with Disentangled Keywords and Intents
Yicheng Zou | Hongwei Liu | Tao Gui | Junzhe Wang | Qi Zhang | Meng Tang | Haixiang Li | Daniell Wang
Findings of the Association for Computational Linguistics: ACL 2022
Text semantic matching is a fundamental task that has been widely used in various scenarios, such as community question answering, information retrieval, and recommendation. Most state-of-the-art matching models, e.g., BERT, directly perform text comparison by processing each word uniformly. However, a query sentence generally comprises content that calls for different levels of matching granularity. Specifically, keywords represent factual information such as actions, entities, and events that should be strictly matched, while intents convey abstract concepts and ideas that can be paraphrased into various expressions. In this work, we propose a simple yet effective training strategy for text semantic matching in a divide-and-conquer manner by disentangling keywords from intents. Our approach can be easily combined with pre-trained language models (PLMs) without influencing their inference efficiency, achieving stable performance improvements across a wide range of PLMs on three benchmarks.
2019
Learning Compressed Sentence Representations for On-Device Text Processing
Dinghan Shen | Pengyu Cheng | Dhanasekar Sundararaman | Xinyuan Zhang | Qian Yang | Meng Tang | Asli Celikyilmaz | Lawrence Carin
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Vector representations of sentences, trained on massive text corpora, are widely used as generic sentence embeddings across a variety of NLP problems. The learned representations are generally assumed to be continuous and real-valued, giving rise to a large memory footprint and slow retrieval speed, which hinders their applicability to low-resource (memory and computation) platforms, such as mobile devices. In this paper, we propose four different strategies to transform continuous and generic sentence embeddings into a binarized form, while preserving their rich semantic information. The introduced methods are evaluated across a wide range of downstream tasks, where the binarized sentence embeddings are demonstrated to degrade performance by only about 2% relative to their continuous counterparts, while reducing the storage requirement by over 98%. Moreover, with the learned binary representations, the semantic relatedness of two sentences can be evaluated by simply calculating their Hamming distance, which is more computationally efficient than the inner product operation between continuous embeddings. Detailed analysis and a case study further validate the effectiveness of the proposed methods.
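To illustrate the efficiency claim in the abstract, the sketch below compares two binarized embedding vectors via Hamming distance. The binarization scheme (a simple sign threshold), the embedding dimension, and the random vectors are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

# Hypothetical example: the embeddings and the sign-threshold binarization
# below are stand-ins, not the strategies proposed in the paper.
rng = np.random.default_rng(0)
dim = 256
emb_a = rng.random(dim)  # continuous embedding for sentence A
emb_b = rng.random(dim)  # continuous embedding for sentence B

# Naive binarization: threshold each coordinate at 0.5.
bin_a = (emb_a > 0.5).astype(np.uint8)
bin_b = (emb_b > 0.5).astype(np.uint8)

# Hamming distance: number of differing bits (smaller = more similar).
hamming = int(np.count_nonzero(bin_a != bin_b))

# With bit-packed vectors, the same distance is an XOR followed by a
# popcount, which is far cheaper than a float inner product.
packed_a = np.packbits(bin_a)
packed_b = np.packbits(bin_b)
hamming_packed = int(np.unpackbits(packed_a ^ packed_b).sum())

assert hamming == hamming_packed
```

In a retrieval setting, packing 256 bits into 32 bytes (versus 1 KiB for 256 float32 values) is where the >98% storage reduction the abstract cites comes from.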