Daniel Krefl
2024
Zero- and Few-Shots Knowledge Graph Triplet Extraction with Large Language Models
Andrea Papaluca | Daniel Krefl | Sergio Rodríguez Méndez | Artem Lensky | Hanna Suominen
Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
In this work, we tested the Triplet Extraction (TE) capabilities of a variety of Large Language Models (LLMs) of different sizes in the Zero- and Few-Shot settings. In detail, we proposed a pipeline that dynamically gathers contextual information from a Knowledge Base (KB), both in the form of context triplets and of (sentence, triplets) pairs serving as examples, and provides it to the LLM through a prompt. The additional context allowed the LLMs to be competitive with all the older, fully trained baselines based on the Bidirectional Long Short-Term Memory (BiLSTM) network architecture. We further conducted a detailed analysis of the quality of the gathered KB context, finding it to be strongly correlated with the final TE performance of the model. In contrast, the size of the model appeared to improve the TE capabilities of the LLMs only logarithmically. We release the code on GitHub for reproducibility.
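The abstract describes the prompt pipeline only at a high level. Below is a minimal, hypothetical sketch of how KB context triplets and (sentence, triplets) example pairs could be assembled into an extraction prompt; the Triplet class, the build_prompt function, and the prompt wording are illustrative assumptions, not the released code.

```python
# Hypothetical sketch of a zero-/few-shot triplet-extraction prompt builder.
# The KB lookup and the downstream LLM call are out of scope here; in the
# paper, context is retrieved dynamically from a real Knowledge Base.
from dataclasses import dataclass


@dataclass
class Triplet:
    head: str
    relation: str
    tail: str

    def render(self) -> str:
        return f"({self.head}; {self.relation}; {self.tail})"


def build_prompt(sentence: str,
                 context_triplets: list[Triplet],
                 examples: list[tuple[str, list[Triplet]]]) -> str:
    """Assemble a prompt from KB context triplets and few-shot examples."""
    parts = ["Extract (head; relation; tail) triplets from the sentence."]
    if context_triplets:
        parts.append("Relevant KB facts:")
        parts += [f"- {t.render()}" for t in context_triplets]
    for ex_sentence, ex_triplets in examples:  # few-shot demonstrations
        parts.append(f"Sentence: {ex_sentence}")
        parts.append("Triplets: " + ", ".join(t.render() for t in ex_triplets))
    parts.append(f"Sentence: {sentence}")
    parts.append("Triplets:")
    return "\n".join(parts)


if __name__ == "__main__":
    kb_facts = [Triplet("Canberra", "capital of", "Australia")]
    demos = [("Paris is the capital of France.",
              [Triplet("Paris", "capital of", "France")])]
    print(build_prompt("Canberra was founded in 1913.", kb_facts, demos))
```

With an empty examples list the same builder yields a zero-shot prompt, which matches the two settings compared in the paper.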
2022
Pretrained Knowledge Base Embeddings for improved Sentential Relation Extraction
Andrea Papaluca | Daniel Krefl | Hanna Suominen | Artem Lenskiy
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
In this work, we propose to combine pretrained knowledge base graph embeddings with transformer-based language models to improve performance on the sentential Relation Extraction task in natural language processing. Our proposed model is based on a simple variation of existing models to incorporate off-task pretrained graph embeddings with an on-task finetuned BERT encoder. We perform a detailed statistical evaluation of the model on standard datasets. We provide evidence that the added graph embeddings improve the performance, making such a simple approach competitive with the state-of-the-art models that perform explicit on-task training of the graph embeddings. Furthermore, we observe for the underlying BERT model an interesting power-law scaling behavior between the variance of the F1 score obtained for a relation class and its support in terms of training examples.
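As a rough illustration of the described architecture, the following is a hypothetical PyTorch sketch that concatenates frozen, off-task pretrained KB graph embeddings of the two candidate entities with a [CLS] sentence representation from an on-task finetuned BERT-style encoder before relation classification. The class and argument names, the concatenation-based fusion, and the dimensions are assumptions, not the paper's released model.

```python
import torch
import torch.nn as nn


class KBEnhancedRelationClassifier(nn.Module):
    """Sentential relation classifier that fuses a BERT-style [CLS] encoding
    with pretrained KB graph embeddings of the head and tail entities."""

    def __init__(self, encoder, graph_embeddings: torch.Tensor,
                 num_relations: int, hidden_size: int = 768):
        super().__init__()
        self.encoder = encoder  # assumed to return an object with .last_hidden_state
        # Off-task pretrained graph embeddings, kept frozen during training.
        self.graph_emb = nn.Embedding.from_pretrained(graph_embeddings, freeze=True)
        graph_dim = graph_embeddings.size(1)
        self.classifier = nn.Linear(hidden_size + 2 * graph_dim, num_relations)

    def forward(self, input_ids, attention_mask, head_kb_id, tail_kb_id):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]   # [CLS] sentence representation
        head = self.graph_emb(head_kb_id)   # KB embedding of head entity
        tail = self.graph_emb(tail_kb_id)   # KB embedding of tail entity
        return self.classifier(torch.cat([cls, head, tail], dim=-1))
```

Freezing the graph embeddings keeps the off-task pretrained KB structure intact, so only the encoder and the classification head are finetuned on-task, in line with the "simple variation" the abstract describes.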