STAGE: Simplified Text-Attributed Graph Embeddings using Pre-trained LLMs

Aaron Zolnai-Lucas, Jack Boylan, Chris Hokamp, Parsa Ghaffari


Abstract
We present STAGE, a straightforward yet effective method for enhancing node features in Graph Neural Network (GNN) models that encode Text-Attributed Graphs (TAGs). Our approach leverages Large Language Models (LLMs) to generate embeddings for textual attributes. STAGE achieves competitive results on various node classification benchmarks while remaining simpler to implement than current state-of-the-art (SoTA) techniques. We show that using pre-trained LLMs as embedding generators provides robust features for ensemble GNN training, enabling pipelines that are simpler than current SoTA approaches, which require multiple expensive training and prompting stages. We also implement diffusion-pattern GNNs in an effort to make this pipeline scale to graphs beyond academic benchmarks.
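To make the core idea concrete, below is a minimal sketch of the pipeline the abstract describes: a frozen pre-trained language model embeds each node's text attribute, and only a lightweight GNN is trained on top of those fixed features. This assumes a sentence-transformers encoder and a PyTorch Geometric GCN; the model name, toy graph, and layer sizes are illustrative placeholders, not the paper's actual configuration (the ensemble and diffusion-pattern variants are omitted).

```python
# Sketch: LLM embeddings as node features for a GNN (not the paper's exact setup).
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
from torch_geometric.nn import GCNConv

# 1) Embed each node's text with a frozen pre-trained encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative choice
node_texts = ["Title and abstract of paper A ...", "Title and abstract of paper B ..."]
x = torch.tensor(encoder.encode(node_texts), dtype=torch.float)

# 2) Train only a small GNN on top of the fixed embeddings.
class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)

edge_index = torch.tensor([[0, 1], [1, 0]])  # toy 2-node citation graph
model = GCN(x.size(1), 64, num_classes=2)
logits = model(x, edge_index)  # per-node class logits
```

Because the LLM is never fine-tuned, the expensive step (encoding node texts) runs once up front, and each GNN in an ensemble reuses the same cached features.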
Anthology ID: 2024.kallm-1.10
Volume: Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024)
Month: August
Year: 2024
Address: Bangkok, Thailand
Editors: Russa Biswas, Lucie-Aimée Kaffee, Oshin Agarwal, Pasquale Minervini, Sameer Singh, Gerard de Melo
Venues: KaLLM | WS
Publisher: Association for Computational Linguistics
Pages: 92–104
URL: https://aclanthology.org/2024.kallm-1.10
DOI: 10.18653/v1/2024.kallm-1.10
Cite (ACL): Aaron Zolnai-Lucas, Jack Boylan, Chris Hokamp, and Parsa Ghaffari. 2024. STAGE: Simplified Text-Attributed Graph Embeddings using Pre-trained LLMs. In Proceedings of the 1st Workshop on Knowledge Graphs and Large Language Models (KaLLM 2024), pages 92–104, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal): STAGE: Simplified Text-Attributed Graph Embeddings using Pre-trained LLMs (Zolnai-Lucas et al., KaLLM-WS 2024)
PDF: https://aclanthology.org/2024.kallm-1.10.pdf