%0 Conference Proceedings
%T On Length Divergence Bias in Textual Matching Models
%A Jiang, Lan
%A Lyu, Tianshu
%A Lin, Yankai
%A Chong, Meng
%A Lyu, Xiaoyong
%A Yin, Dawei
%Y Muresan, Smaranda
%Y Nakov, Preslav
%Y Villavicencio, Aline
%S Findings of the Association for Computational Linguistics: ACL 2022
%D 2022
%8 May
%I Association for Computational Linguistics
%C Dublin, Ireland
%F jiang-etal-2022-length
%X Despite the remarkable success deep models have achieved in Textual Matching (TM) tasks, it still remains unclear whether they truly understand language or measure the semantic similarity of texts by exploiting statistical bias in datasets. In this work, we provide a new perspective to study this issue — via the length divergence bias. We find the length divergence heuristic widely exists in prevalent TM datasets, providing direct cues for prediction. To determine whether TM models have adopted such heuristic, we introduce an adversarial evaluation scheme which invalidates the heuristic. In this adversarial setting, all TM models perform worse, indicating they have indeed adopted this heuristic. Through a well-designed probing experiment, we empirically validate that the bias of TM models can be attributed in part to extracting the text length information during training. To alleviate the length divergence bias, we propose an adversarial training method. The results demonstrate we successfully improve the robustness and generalization ability of models at the same time.
%R 10.18653/v1/2022.findings-acl.330
%U https://aclanthology.org/2022.findings-acl.330
%U https://doi.org/10.18653/v1/2022.findings-acl.330
%P 4187-4193