Fine-tuning Language Models for Triple Extraction with Data Augmentation

Yujia Zhang, Tyler Sadler, Mohammad Reza Taesiri, Wenjie Xu, Marek Reformat


Abstract
Advanced language models with impressive capabilities to process textual information can more effectively extract high-quality triples, which are the building blocks of knowledge graphs. Our work examines language models’ abilities to extract entities and the relationships between them. We use a diverse data augmentation process to fine-tune large language models to extract triples from text. Fine-tuning is performed using a mix of trainers from HuggingFace and five public datasets: variations of WebNLG, SKE, DocRed, FewRel, and KELM. Evaluation involves comparing model outputs with test-set triples based on several criteria, such as type, partial, exact, and strict accuracy. The obtained results outperform ChatGPT and even match or exceed the performance of GPT-4.
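The abstract mentions scoring extracted triples against test-set triples under criteria such as exact and partial matching. The sketch below illustrates what such a comparison can look like; the matching rules, normalization, and function names are illustrative assumptions, not the paper's actual evaluation code.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def normalize(s: str) -> str:
    """Lowercase and strip whitespace so superficial differences are ignored."""
    return s.strip().lower()

def evaluate_triples(predicted: List[Triple], gold: List[Triple]) -> dict:
    """Compare model-extracted triples against gold triples.

    Two of the criteria named in the abstract are sketched here (assumed rules):
      - exact: subject, relation, and object all match a gold triple
      - partial: the relation matches and at least one of subject/object matches
    """
    gold_set = {tuple(map(normalize, t)) for t in gold}
    exact = partial = 0
    for subj, rel, obj in (tuple(map(normalize, t)) for t in predicted):
        if (subj, rel, obj) in gold_set:
            exact += 1
            partial += 1
        elif any(rel == g_rel and (subj == g_subj or obj == g_obj)
                 for g_subj, g_rel, g_obj in gold_set):
            partial += 1
    n = max(len(predicted), 1)
    return {"exact_accuracy": exact / n, "partial_accuracy": partial / n}

# Example: one exact match and one partial match out of two predictions.
pred = [("Bangkok", "capital_of", "Thailand"),
        ("ACL", "publisher_of", "KaLLM 2024")]
gold = [("Bangkok", "capital_of", "Thailand"),
        ("Association for Computational Linguistics", "publisher_of", "KaLLM 2024")]
print(evaluate_triples(pred, gold))
```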
Anthology ID:
2024.kallm-1.12
Volume:
Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Russa Biswas, Lucie-Aimée Kaffee, Oshin Agarwal, Pasquale Minervini, Sameer Singh, Gerard de Melo
Venues:
KaLLM | WS
Publisher:
Association for Computational Linguistics
Pages:
116–124
URL:
https://aclanthology.org/2024.kallm-1.12
DOI:
10.18653/v1/2024.kallm-1.12
Cite (ACL):
Yujia Zhang, Tyler Sadler, Mohammad Reza Taesiri, Wenjie Xu, and Marek Reformat. 2024. Fine-tuning Language Models for Triple Extraction with Data Augmentation. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 116–124, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Fine-tuning Language Models for Triple Extraction with Data Augmentation (Zhang et al., KaLLM-WS 2024)
PDF:
https://aclanthology.org/2024.kallm-1.12.pdf