Bin Guo
2024
AntLM: Bridging Causal and Masked Language Models
Xinru Yu | Bin Guo | Shiwei Luo | Jie Wang | Tao Ji | Yuanbin Wu
The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning
Causal Language Modeling (CLM) and Masked Language Modeling (MLM) are two mainstream learning paradigms based on Transformer networks, specifically the Decoder-only and Encoder-only architectures. Each paradigm shows distinct strengths and weaknesses on downstream tasks. In the BabyLM Challenge 2023, the MLM paradigm achieved the best average performance, while the CLM paradigm demonstrated significantly faster convergence. For the BabyLM Challenge 2024, we propose a novel language modeling paradigm named AntLM, which integrates CLM and MLM to leverage the advantages of both classic paradigms. We chose the strict-small track and conducted experiments on two foundation models: BabyLlama, representing CLM, and LTG-BERT, representing MLM. During training of each foundation model, we alternate between the CLM and MLM training objectives, together with the corresponding causal or bidirectional attention masks. Experimental results show that combining the two pretraining objectives leverages their complementary strengths and improves overall training performance. Under the same number of epochs, AntLM_BabyLlama improves the Macro-average by 1%, and AntLM_LTG-BERT achieves a 2.2% increase over the baselines.
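The alternation described in the abstract can be pictured with a short, self-contained sketch. The toy encoder, vocabulary, epoch schedule, and hyperparameters below are illustrative assumptions, not the actual BabyLlama or LTG-BERT training setup; the sketch only shows how a causal next-token loss and a masked-token loss can share one set of parameters by switching the training objective and the attention mask between epochs.

import torch
import torch.nn as nn

VOCAB, D_MODEL, SEQ_LEN, MASK_ID = 1000, 64, 32, 0   # toy sizes (assumptions)

class TinyLM(nn.Module):
    """Stand-in for a real foundation model: one parameter set, two objectives."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, ids, causal):
        # Causal (lower-triangular) attention for the CLM phase,
        # full bidirectional attention for the MLM phase.
        attn_mask = None
        if causal:
            size = ids.size(1)
            attn_mask = torch.triu(torch.full((size, size), float("-inf")), diagonal=1)
        hidden = self.encoder(self.embed(ids), mask=attn_mask)
        return self.head(hidden)

def clm_loss(model, ids):
    # Next-token prediction: predict token t+1 from tokens <= t.
    logits = model(ids[:, :-1], causal=True)
    return nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                       ids[:, 1:].reshape(-1))

def mlm_loss(model, ids, mask_prob=0.15):
    # Mask random positions and predict them with bidirectional attention.
    is_masked = torch.rand(ids.shape) < mask_prob
    corrupted = ids.clone()
    corrupted[is_masked] = MASK_ID
    labels = ids.clone()
    labels[~is_masked] = -100                      # ignore unmasked positions
    logits = model(corrupted, causal=False)
    return nn.functional.cross_entropy(logits.reshape(-1, VOCAB),
                                       labels.reshape(-1), ignore_index=-100)

model = TinyLM()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
for epoch in range(10):
    use_clm = epoch % 2 == 0                       # illustrative alternation schedule
    batch = torch.randint(1, VOCAB, (8, SEQ_LEN))  # stand-in data
    loss = clm_loss(model, batch) if use_clm else mlm_loss(model, batch)
    opt.zero_grad()
    loss.backward()
    opt.step()

The key design point is that both phases update the same embeddings, encoder layers, and output head; only the loss function and the attention mask change with the schedule.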
2023
PersonaPKT: Building Personalized Dialogue Agents via Parameter-efficient Knowledge Transfer
Xu Han | Bin Guo | Yoon Jung | Benjamin Yao | Yu Zhang | Xiaohu Liu | Chenlei Guo
Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing (SustaiNLP)
2022
Joint Goal Segmentation and Goal Success Prediction on Multi-Domain Conversations
Meiguo Wang | Benjamin Yao | Bin Guo | Xiaohu Liu | Yu Zhang | Tuan-Hung Pham | Chenlei Guo
Proceedings of the 29th International Conference on Computational Linguistics
To evaluate the performance of a multi-domain goal-oriented Dialogue System (DS), it is important to understand what the users’ goals are for the conversations and whether those goals are successfully achieved. The success rate of goals directly correlates with user satisfaction and the perceived usefulness of the DS. In this paper, we propose a novel automatic dialogue evaluation framework that jointly performs two tasks: goal segmentation and goal success prediction. We extend the RoBERTa-IQ model (Gupta et al., 2021) by adding multi-task learning heads for goal segmentation and success prediction. Using an annotated dataset from a commercial DS, we demonstrate that our proposed model reaches an accuracy on par with single-pass human annotation when measured against a three-pass gold annotation benchmark.
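As a rough illustration (not the RoBERTa-IQ model itself), the joint setup can be sketched as a shared encoder with a token-level head for goal segmentation and a dialogue-level head for success prediction. The roberta-base encoder, the two-way label sets, and the equal loss weighting below are assumptions made for the sketch, not details taken from the paper.

import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer  # assumes HF transformers is available

class JointGoalModel(nn.Module):
    """Shared encoder with a token-level segmentation head and a dialogue-level success head."""
    def __init__(self, encoder_name="roberta-base", n_seg_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.seg_head = nn.Linear(hidden, n_seg_labels)   # per token: goal boundary or not
        self.success_head = nn.Linear(hidden, 2)          # per dialogue: goal achieved or not

    def forward(self, input_ids, attention_mask, seg_labels=None, success_labels=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        tokens = out.last_hidden_state                    # (batch, seq_len, hidden)
        seg_logits = self.seg_head(tokens)                # goal segmentation per token
        success_logits = self.success_head(tokens[:, 0])  # pooled <s> token for success
        loss = None
        if seg_labels is not None and success_labels is not None:
            ce = nn.CrossEntropyLoss()
            seg_loss = ce(seg_logits.view(-1, seg_logits.size(-1)), seg_labels.view(-1))
            loss = seg_loss + ce(success_logits, success_labels)   # equal weighting assumed
        return seg_logits, success_logits, loss

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
batch = tokenizer(["turn on the kitchen light please", "did that work? yes, thanks"],
                  padding=True, return_tensors="pt")
model = JointGoalModel()
seg_logits, success_logits, _ = model(**batch)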