Xingzhang Ren
2023
Dynamic Voting for Efficient Reasoning in Large Language Models
Mingfeng Xue | Dayiheng Liu | Wenqiang Lei | Xingzhang Ren | Baosong Yang | Jun Xie | Yidan Zhang | Dezhong Peng | Jiancheng Lv
Findings of the Association for Computational Linguistics: EMNLP 2023
Multi-path voting methods such as Self-consistency have been used to mitigate reasoning errors in large language models caused by factual errors and hallucination. However, these methods require excessive computing resources because they generate numerous reasoning paths for every problem. Our experiments show that on the arithmetic reasoning task SVAMP, half of the problems fail to obtain noticeable accuracy gains when voting with more than three paths. In this paper, we propose a novel multi-path voting technique called Dynamic Voting, which reduces the number of reasoning paths generated during voting while preserving accuracy by exiting early on problems that large language models can solve confidently. Experimental evaluations on arithmetic, commonsense, and symbolic reasoning tasks under few-shot and zero-shot settings demonstrate that Dynamic Voting achieves comparable accuracy while employing significantly fewer reasoning paths. Notably, one of our Dynamic Voting strategies outperforms Self-consistency using only 24.7% as many paths on the LetterConcat task in the few-shot setting. Furthermore, Dynamic Voting is robust to the choice of exit threshold, and it generalizes well when combined with other voting techniques, different models, and diverse prompts.
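A minimal sketch of the early-exit idea described in the abstract, not the paper's exact strategies: paths are sampled in small batches, and sampling stops as soon as one answer's vote share clears a confidence threshold. The `generate_answer` callable, the threshold value, the batch size, and the path budget are all illustrative assumptions.

```python
from collections import Counter
from typing import Callable

def dynamic_voting(
    generate_answer: Callable[[str], str],  # samples one reasoning path, returns its final answer
    problem: str,
    threshold: float = 0.7,  # vote share needed to exit early (illustrative value)
    batch_size: int = 3,     # paths sampled per round (illustrative)
    max_paths: int = 40,     # upper budget, as in plain Self-consistency
) -> str:
    """Early-exit multi-path voting: stop sampling once one answer
    dominates the votes, instead of always generating max_paths paths."""
    answers: list[str] = []
    while len(answers) < max_paths:
        answers.extend(generate_answer(problem) for _ in range(batch_size))
        top_answer, top_votes = Counter(answers).most_common(1)[0]
        if top_votes / len(answers) >= threshold:
            return top_answer  # confident: exit early and save compute
    return Counter(answers).most_common(1)[0][0]  # budget exhausted: plain majority vote
```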
2022
Unsupervised Preference-Aware Language Identification
Xingzhang Ren | Baosong Yang | Dayiheng Liu | Haibo Zhang | Xiaoyu Lv | Liang Yao | Jun Xie
Findings of the Association for Computational Linguistics: ACL 2022
Recognizing the language of ambiguous texts has become a major challenge in language identification (LID). When using multilingual applications, users have their own language preferences, which can be regarded as external knowledge for LID. Nevertheless, current studies do not consider inter-personal variation due to the lack of user-annotated training data. To fill this gap, we introduce preference-aware LID and propose a novel unsupervised learning strategy. Concretely, we construct a pseudo training set for each user by extracting training samples from a standard LID corpus according to the user's historical language distribution. In addition, we contribute the first user-labeled LID test set, called “U-LID”. Experimental results reveal that our model captures user traits and significantly outperforms existing LID systems on handling ambiguous texts. Our code and benchmark have been released.
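A minimal sketch of the pseudo-training-set construction the abstract describes, under assumed data structures: the corpus is taken to be a mapping from language code to example texts, and the user history an iterable of language codes. The sampling simply matches the per-user language proportions; the paper's actual extraction procedure may differ.

```python
import random
from collections import Counter

def build_pseudo_training_set(corpus, user_history, n_samples=10_000, seed=0):
    """Sample a per-user training set from a standard LID corpus so that
    its language proportions follow the user's historical distribution.

    corpus:       dict mapping language code -> list of example texts (assumed format)
    user_history: iterable of language codes the user has previously produced
    """
    rng = random.Random(seed)
    dist = Counter(user_history)
    total = sum(dist.values())
    pseudo_set = []
    for lang, count in dist.items():
        k = round(n_samples * count / total)  # this user's share for lang
        pool = corpus.get(lang, [])
        if pool:
            pseudo_set.extend((rng.choice(pool), lang) for _ in range(k))
    rng.shuffle(pseudo_set)
    return pseudo_set  # list of (text, label) pairs for this user
```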
Effective Approaches to Neural Query Language Identification
Xingzhang Ren | Baosong Yang | Dayiheng Liu | Haibo Zhang | Xiaoyu Lv | Liang Yao | Jun Xie
Computational Linguistics, Volume 48, Issue 4 - December 2022
Query language identification (Q-LID) plays a crucial role in cross-lingual search engines. Q-LID faces two main challenges: (1) insufficient contextual information in queries for disambiguation; and (2) a lack of query-style training examples for low-resource languages. In this article, we propose a neural Q-LID model that alleviates both problems from the model architecture and data augmentation perspectives. Concretely, we build our model upon the advanced Transformer model. To enhance the discrimination of queries, a variety of external features (e.g., character, word, and script) are fed into the model and fused by a multi-scale attention mechanism. Moreover, to remedy the low-resource challenge in this task, a novel machine translation-based strategy is proposed to automatically generate synthetic query-style data for low-resource languages. We contribute the first Q-LID test set, called QID-21, which consists of search queries in 21 languages. Experimental results reveal that our model yields better classification accuracy than strong baselines and existing LID systems on both query and traditional LID tasks.
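A minimal sketch of attention-based fusion over multiple feature granularities, assuming the character-, word-, and script-level features have already been encoded into fixed-size vectors; this illustrates the fusion idea only and is not the paper's exact multi-scale attention architecture.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Fuse character-, word-, and script-level query representations
    with a learned attention over the three granularities (illustrative)."""

    def __init__(self, d_model: int, n_langs: int):
        super().__init__()
        self.scale_attn = nn.Linear(d_model, 1)   # scores one vector per scale
        self.classifier = nn.Linear(d_model, n_langs)

    def forward(self, char_vec, word_vec, script_vec):
        # Stack the per-scale encodings: (batch, 3 scales, d_model).
        scales = torch.stack([char_vec, word_vec, script_vec], dim=1)
        # Attention weights over the three scales: (batch, 3, 1).
        weights = torch.softmax(self.scale_attn(scales), dim=1)
        # Weighted sum across scales: (batch, d_model).
        fused = (weights * scales).sum(dim=1)
        return self.classifier(fused)  # language logits
```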
Co-authors
- Dayiheng Liu 3
- Baosong Yang 3
- Jun Xie 3
- Haibo Zhang 2
- Xiaoyu Lv 2