Hongtao Xie


2024

Knowledge Context Modeling with Pre-trained Language Models for Contrastive Knowledge Graph Completion
Guangqian Yang | Yi Liu | Lei Zhang | Licheng Zhang | Hongtao Xie | Zhendong Mao
Findings of the Association for Computational Linguistics: ACL 2024

Text-based knowledge graph completion (KGC) methods utilize pre-trained language models for triple encoding and further fine-tune the model to achieve completion. Despite their excellent performance, they neglect the knowledge context in the inference process. Intuitively, knowledge contexts, which refer to the neighboring triples around the target triples, are important information for triple inference, since they provide additional detailed information about the entities. To this end, we propose a novel framework named KnowC, which models the knowledge context as additional prompts with pre-trained language models for knowledge graph completion. Given the substantial number of neighbors typically associated with entities, along with the constrained input token capacity of language models, we further devise several strategies to sample the neighbors. We conduct extensive experiments on the common datasets FB15k-237, WN18RR and Wikidata5M; the results show that KnowC achieves state-of-the-art performance.
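
The following is a minimal sketch, not the paper's implementation, of the general idea described above: gather an entity's neighboring triples (its knowledge context), sample a few of them to respect the model's input length budget, and verbalize them as a textual prompt alongside the target query. The toy graph, uniform random sampling, and prompt template are illustrative assumptions; the paper proposes its own sampling strategies and prompt design.

```python
import random

# Hypothetical toy knowledge graph of (head, relation, tail) triples.
TRIPLES = [
    ("Paris", "capital_of", "France"),
    ("Paris", "located_in", "Europe"),
    ("France", "member_of", "EU"),
    ("France", "official_language", "French"),
]

def neighbors(entity, triples):
    """Return all triples mentioning the entity (its knowledge context)."""
    return [t for t in triples if entity in (t[0], t[2])]

def build_prompt(head, relation, triples, max_neighbors=2, seed=0):
    """Verbalize the (head, relation, ?) query plus a sampled subset of
    neighbor triples as one text prompt for a pre-trained language model.
    Uniform random sampling stands in for the paper's sampling strategies."""
    random.seed(seed)
    ctx = neighbors(head, triples)
    sampled = random.sample(ctx, min(max_neighbors, len(ctx)))
    context_text = " ".join(f"{h} {r} {t}." for h, r, t in sampled)
    return f"Context: {context_text} Query: {head} {relation} [MASK]."

if __name__ == "__main__":
    print(build_prompt("Paris", "capital_of", TRIPLES))
```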

2020

Curriculum Learning for Natural Language Understanding
Benfeng Xu | Licheng Zhang | Zhendong Mao | Quan Wang | Hongtao Xie | Yongdong Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

With the great success of pre-trained language models, the pretrain-finetune paradigm has become the dominant solution for natural language understanding (NLU) tasks. At the fine-tune stage, target task data is usually introduced in a completely random order and treated equally. However, examples in NLU tasks can vary greatly in difficulty, and, similar to the human learning procedure, language models can benefit from an easy-to-difficult curriculum. Based on this idea, we propose our Curriculum Learning approach. By reviewing the training set in a crossed way, we are able to distinguish easy examples from difficult ones and arrange a curriculum for language models. Without any manual model architecture design or use of external data, our Curriculum Learning approach obtains significant and universal performance improvements on a wide range of NLU tasks.
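
Below is a minimal sketch of an easy-to-difficult curriculum over a training set, for illustration only. The difficulty function here (sentence length) is a stand-in assumption; the paper instead derives difficulty scores by cross-reviewing the training set with the model itself, which is not reproduced here.

```python
import random

def difficulty(example):
    """Stand-in difficulty metric: sentence length in tokens."""
    return len(example.split())

def curriculum_batches(trainset, num_stages=3, batch_size=2, seed=0):
    """Yield (stage, batch) pairs following an easy-to-difficult curriculum:
    sort examples by difficulty, then at stage k draw batches only from the
    easiest k/num_stages fraction of the data."""
    rng = random.Random(seed)
    ordered = sorted(trainset, key=difficulty)
    for stage in range(1, num_stages + 1):
        pool = ordered[: max(batch_size, len(ordered) * stage // num_stages)]
        rng.shuffle(pool)
        for i in range(0, len(pool), batch_size):
            yield stage, pool[i : i + batch_size]

if __name__ == "__main__":
    data = [
        "short text",
        "a slightly longer sentence here",
        "an even longer and more complex training sentence appears here",
        "tiny",
    ]
    for stage, batch in curriculum_batches(data):
        print(stage, batch)
```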