Zhenzi Li
2024
Leveraging Estimated Transferability Over Human Intuition for Model Selection in Text Ranking
Jun Bai | Zhuofan Chen | Zhenzi Li | Hanhua Hong | Jianfei Zhang | Chen Li | Chenghua Lin | Wenge Rong
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Text ranking has witnessed significant advancements, attributed to the use of dual-encoders enhanced by Pre-trained Language Models (PLMs). Given the proliferation of available PLMs, selecting the most effective one for a given dataset has become a non-trivial challenge. As a promising alternative to human intuition and brute-force fine-tuning, Transferability Estimation (TE) has emerged as an effective approach to model selection. However, current TE methods are primarily designed for classification tasks, and their estimated transferability may not align well with the objectives of text ranking. To address this challenge, we propose to compute the expected rank as transferability, explicitly reflecting the model’s ranking capability. Furthermore, to mitigate anisotropy and incorporate training dynamics, we adaptively scale isotropic sentence embeddings to yield an accurate expected rank score. Our resulting method, Adaptive Ranking Transferability (AiRTran), can effectively capture subtle differences between models. In challenging model selection scenarios across various text ranking datasets, it demonstrates significant improvements over previous classification-oriented TE methods, human intuition, and ChatGPT, at minimal time cost.
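The core idea of using an expected rank as a transferability signal can be illustrated with a toy sketch. This is not the paper's AiRTran method (which additionally applies adaptive isotropic scaling of embeddings); it only shows the basic ranking-oriented proxy: embed queries and candidate documents with a frozen model, score candidates by cosine similarity, and average the rank of each query's relevant document (lower is better). All function and variable names here are illustrative assumptions.

```python
import numpy as np

def expected_rank_score(query_embs, doc_embs, relevant_idx):
    """Toy transferability proxy for text ranking: the average rank
    of the relevant document under cosine similarity (1 = best).
    A lower expected rank suggests the (frozen) encoder already
    orders candidates well for this dataset."""
    # Normalize rows so dot products equal cosine similarities.
    q = query_embs / np.linalg.norm(query_embs, axis=1, keepdims=True)
    d = doc_embs / np.linalg.norm(doc_embs, axis=1, keepdims=True)
    sims = q @ d.T  # shape: (num_queries, num_docs)

    ranks = []
    for i, rel in enumerate(relevant_idx):
        # Position of the relevant document in the similarity ordering.
        order = np.argsort(-sims[i])
        ranks.append(int(np.where(order == rel)[0][0]) + 1)
    return float(np.mean(ranks))
```

Comparing this score across candidate PLMs (using each model's own embeddings) would then rank the models by estimated suitability for the dataset, without fine-tuning any of them.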
2020
Modelling Long-distance Node Relations for KBQA with Global Dynamic Graph
Xu Wang | Shuai Zhao | Jiale Han | Bo Cheng | Hao Yang | Jianchang Ao | Zhenzi Li
Proceedings of the 28th International Conference on Computational Linguistics
The structural information of Knowledge Bases (KBs) has proven effective for Question Answering (QA). Previous studies rely on deep graph neural networks (GNNs) to capture rich structural information, but these may fail to model relations between nodes at particularly long distances due to the over-smoothing issue. To address this challenge, we propose a novel framework, GlobalGraph, which models long-distance node relations from two views: 1) Node type similarity: GlobalGraph assigns each node a global type label and models long-distance node relations through global type-label similarity; 2) Correlation between nodes and questions: we learn similarity scores between nodes and the question, and model long-distance node relations through the summed scores of two nodes. We conduct extensive experiments on two widely used multi-hop KBQA datasets to demonstrate the effectiveness of our method.
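The two views described above can be sketched in a few lines. This is a hypothetical illustration, not the GlobalGraph implementation: it combines (1) the similarity of two nodes' global type-label embeddings with (2) the sum of each node's similarity to the question. The node/embedding representations and names are assumptions for the sketch.

```python
import numpy as np

def long_distance_relation(node_a, node_b, question_emb, type_embs):
    """Toy relation score between two (possibly distant) graph nodes:
    1) similarity of their global type-label embeddings, plus
    2) the sum of each node's similarity to the question embedding.
    Each node is a dict with a "type" key and an "emb" vector."""
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    # View 1: global type-label similarity.
    type_sim = cos(type_embs[node_a["type"]], type_embs[node_b["type"]])
    # View 2: summed node-question similarity scores.
    question_sim = cos(node_a["emb"], question_emb) + cos(node_b["emb"], question_emb)
    return type_sim + question_sim
```

Because both views depend only on learned labels and question similarity, not on graph distance, such a score can connect nodes that a depth-limited GNN would never relate.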
Co-authors
- Jun Bai 1
- Zhuofan Chen 1
- Hanhua Hong 1
- Jianfei Zhang 1
- Chen Li 1