Mehrnoosh Mirtaheri
2023
History repeats: Overcoming catastrophic forgetting for event-centric temporal knowledge graph completion
Mehrnoosh Mirtaheri | Mohammad Rostami | Aram Galstyan
Findings of the Association for Computational Linguistics: ACL 2023
Temporal knowledge graph (TKG) completion models typically rely on having access to the entire graph during training. However, in real-world scenarios, TKG data is often received incrementally as events unfold, leading to a dynamic non-stationary data distribution over time. While one could incorporate fine-tuning into existing methods to allow them to adapt to evolving TKG data, this can lead to forgetting previously learned patterns. Alternatively, retraining the model on the entire updated TKG mitigates forgetting but is computationally burdensome. To address these challenges, we propose a general continual training framework that is applicable to any TKG completion method and leverages two key ideas: (i) a temporal regularization that encourages repurposing of less important model parameters for learning new knowledge, and (ii) a clustering-based experience replay that reinforces past knowledge by selectively preserving only a small portion of the past data. Our experimental results on widely used event-centric TKG datasets demonstrate the effectiveness of our proposed continual training framework in adapting to new events while reducing catastrophic forgetting. Further, we perform ablation studies to show the effectiveness of each component of our proposed framework. Finally, we investigate the relation between the memory dedicated to experience replay and the benefit gained from our clustering-based sampling strategy.
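The abstract names two components: an importance-weighted temporal regularizer and a clustering-based replay buffer. Below is a minimal, hypothetical Python sketch of how such components could look; the names (`prev_params`, `importance`, the replay `budget`) and the use of K-means are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np
from sklearn.cluster import KMeans


def temporal_regularization(model, prev_params, importance, lam=0.1):
    """Importance-weighted L2 penalty (a sketch, not the paper's exact form):
    parameters that mattered for past snapshots are pulled toward their
    previous values, while less important parameters remain free to be
    repurposed for newly arriving events."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - prev_params[name]) ** 2).sum()
    return lam * penalty


def clustered_replay_buffer(fact_embeddings, facts, budget=64, n_clusters=8):
    """Cluster past facts in embedding space and keep a few representatives
    per cluster, so a small replay buffer still covers diverse past patterns."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(fact_embeddings)
    per_cluster = max(1, budget // n_clusters)
    buffer = []
    for c in range(n_clusters):
        for i in np.flatnonzero(labels == c)[:per_cluster]:
            buffer.append(facts[i])
    return buffer
```

At each new graph snapshot, the training loss would then combine the completion loss on incoming events, the same loss on the sampled replay buffer, and the regularization term.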
2022
StATIK: Structure and Text for Inductive Knowledge Graph Completion
Elan Markowitz | Keshav Balasubramanian | Mehrnoosh Mirtaheri | Murali Annavaram | Aram Galstyan | Greg Ver Steeg
Findings of the Association for Computational Linguistics: NAACL 2022
Knowledge graphs (KGs) often represent knowledge bases that are incomplete. Machine learning models can alleviate this by helping automate graph completion. Recently, there has been growing interest in completing knowledge bases that are dynamic, where previously unseen entities may be added to the KG with many missing links. In this paper, we present StATIK (Structure And Text for Inductive Knowledge completion). StATIK uses language models to extract semantic information from text descriptions, while using message passing neural networks to capture structural information. StATIK achieves state-of-the-art results on three challenging inductive baselines. We further analyze our hybrid model through detailed ablation studies.
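A minimal sketch of the hybrid idea described above, assuming a generic BERT text encoder and GraphSAGE message-passing layers; the model name, dimensions, and layer choices are assumptions for illustration, not the paper's actual architecture.

```python
import torch
from torch import nn
from torch_geometric.nn import SAGEConv
from transformers import AutoModel, AutoTokenizer


class HybridEncoder(nn.Module):
    """Text + structure encoder (illustrative sketch): a pretrained language
    model embeds entity descriptions, then a message-passing GNN refines
    those embeddings over the knowledge graph's edges."""

    def __init__(self, lm_name="bert-base-uncased", hidden=256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(lm_name)
        self.lm = AutoModel.from_pretrained(lm_name)
        self.proj = nn.Linear(self.lm.config.hidden_size, hidden)
        self.conv1 = SAGEConv(hidden, hidden)
        self.conv2 = SAGEConv(hidden, hidden)

    def forward(self, descriptions, edge_index):
        # Semantic side: [CLS] embedding of each entity's text description.
        toks = self.tokenizer(descriptions, padding=True, truncation=True,
                              return_tensors="pt")
        x = self.proj(self.lm(**toks).last_hidden_state[:, 0])
        # Structural side: two rounds of message passing over graph edges.
        x = torch.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)
```

Because a previously unseen entity arrives with a text description, its embedding can be computed directly from that description plus its incident edges, which is what makes this style of model inductive rather than tied to a fixed entity vocabulary.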