Yuan Cui
2024
BERT-BC: A Unified Alignment and Interaction Model over Hierarchical BERT for Response Selection
Zhenfei Yang | Beiming Yu | Yuan Cui | Shi Feng | Daling Wang | Yifei Zhang
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Recently, we have witnessed a significant performance boost on the dialogue response selection task achieved by Cross-Encoder based models. However, such models directly feed the concatenation of context and response into the pre-trained model for interactive inference, ignoring comprehensive independent representation modeling of the context and response. Moreover, randomly sampling negative responses from other dialogue contexts is simplistic, and the learned models generalize poorly in realistic scenarios. In this paper, we propose a response selection model called BERT-BC that combines the representation-based Bi-Encoder and the interaction-based Cross-Encoder. Three contrastive learning methods are devised for the Bi-Encoder to align context and response and obtain better semantic representations. Meanwhile, according to the alignment difficulty of context and response semantics, harder samples are dynamically selected from the same batch at negligible cost and sent to the Cross-Encoder to enhance the model's interactive reasoning ability. Experimental results show that BERT-BC achieves state-of-the-art performance on three benchmark datasets for multi-turn response selection.
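The abstract's central mechanism, scoring all in-batch context-response pairs with the Bi-Encoder and routing the hardest negatives to the Cross-Encoder, can be sketched as follows. This is a minimal illustration, not the authors' code: the embedding dimension, dot-product similarity, and InfoNCE-style alignment loss are assumptions, since the abstract does not specify them.

```python
# Minimal sketch (assumptions noted above) of Bi-Encoder alignment plus
# dynamic in-batch hard-negative selection for a Cross-Encoder.
import torch
import torch.nn.functional as F

def select_hard_negatives(ctx_emb, rsp_emb, k=1):
    """ctx_emb, rsp_emb: [batch, dim] Bi-Encoder embeddings, where
    rsp_emb[i] is the gold response for ctx_emb[i]. Returns the indices
    of the k highest-scoring non-gold responses per context."""
    sim = ctx_emb @ rsp_emb.T               # [batch, batch] pair scores
    sim.fill_diagonal_(float("-inf"))       # mask out the gold pairs
    return sim.topk(k, dim=1).indices       # hardest in-batch negatives

def alignment_loss(ctx_emb, rsp_emb, temperature=0.05):
    """InfoNCE-style contrastive loss aligning contexts with their
    gold responses against all other in-batch responses."""
    logits = (F.normalize(ctx_emb, dim=-1) @
              F.normalize(rsp_emb, dim=-1).T) / temperature
    labels = torch.arange(ctx_emb.size(0))  # gold pairs lie on the diagonal
    return F.cross_entropy(logits, labels)

# Random embeddings standing in for BERT outputs.
ctx, rsp = torch.randn(8, 768), torch.randn(8, 768)
print(select_hard_negatives(ctx, rsp, k=2).shape)  # torch.Size([8, 2])
print(alignment_loss(ctx, rsp).item())
```

In practice the selected hard pairs would be re-tokenized as concatenated context-response sequences and fed to the Cross-Encoder, so the extra cost is only the cheap similarity lookup over embeddings the Bi-Encoder has already computed.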
Improving Role-Oriented Dialogue Summarization with Interaction-Aware Contrastive Learning
Weihong Guan | Shi Feng | Daling Wang | Faliang Huang | Yifei Zhang | Yuan Cui
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Role-oriented dialogue summarization aims at generating summaries for the different roles in a dialogue, e.g., user and agent. Interaction between the roles is vital for the task. Existing methods cannot fully capture the interaction patterns between roles when encoding dialogue and are thus prone to ignoring key interaction-related information. In this paper, we propose a contrastive learning based interaction-aware model for role-oriented dialogue summarization, namely CIAM. An interaction-aware contrastive objective is constructed to guide the encoded dialogue representation to learn role-level interaction. The representation is then used by the decoder to generate role-oriented summaries. The contrastive objective is trained jointly with the primary dialogue summarization task. Additionally, we innovatively use different decoder start tokens to control which kind of summary to generate, and can thus generate different role-oriented summaries with a unified model. Experimental results show that our method achieves new state-of-the-art results on two public datasets. Extensive analyses further demonstrate that our method excels at capturing interaction information between different roles and at producing informative summaries.
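The decoder-start-token control described above can be sketched with a generic seq2seq backbone: one model, two summaries, selected by which token seeds the decoder. This is a hedged illustration using Hugging Face Transformers; the BART backbone and the role token names [USER] and [AGENT] are assumptions, not details taken from the paper.

```python
# Minimal sketch: steer one seq2seq model toward different role-oriented
# summaries by swapping the decoder start token. Backbone and token
# names are illustrative assumptions.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Register one start token per role and grow the embedding table to match.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[USER]", "[AGENT]"]}
)
model.resize_token_embeddings(len(tokenizer))

dialogue = "User: My order never arrived. Agent: I will resend it today."
inputs = tokenizer(dialogue, return_tensors="pt")

# The same (fine-tuned) model yields a different summary depending on
# which role token seeds the decoder; untrained new embeddings would of
# course need task-specific training first.
for role in ("[USER]", "[AGENT]"):
    start_id = tokenizer.convert_tokens_to_ids(role)
    summary_ids = model.generate(
        **inputs, decoder_start_token_id=start_id, max_length=40
    )
    print(role, tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```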