Zhengyu Hu


2024

Let’s Ask GNN: Empowering Large Language Model for Graph In-Context Learning
Zhengyu Hu | Yichuan Li | Zhengyu Chen | Jingang Wang | Han Liu | Kyumin Lee | Kaize Ding
Findings of the Association for Computational Linguistics: EMNLP 2024

Textual Attributed Graphs (TAGs) are crucial for modeling complex real-world systems, yet leveraging large language models (LLMs) for TAGs presents unique challenges due to the gap between sequential text processing and graph-structured data. We introduce AskGNN, a novel approach that bridges this gap by leveraging In-Context Learning (ICL) to integrate graph data and task-specific information into LLMs. AskGNN employs a Graph Neural Network (GNN)-powered structure-enhanced retriever to select labeled nodes across graphs, incorporating complex graph structures and their supervision signals. Our learning-to-retrieve algorithm optimizes the retriever to select example nodes that maximize LLM performance on graph tasks. Experiments across three tasks and seven LLMs demonstrate AskGNN’s superior effectiveness, opening new avenues for applying LLMs to graph-structured data without extensive fine-tuning.
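
To make the retrieval step concrete, the snippet below is a minimal sketch, assuming a cosine-similarity retriever over GNN node embeddings: embed all nodes with one message-passing step, score labeled candidates against the query node, and take the top-k as in-context examples. The names `gnn_embed` and `retrieve_examples` and the toy graph are illustrative assumptions, not the paper's code, and the actual AskGNN retriever is trained with the learning-to-retrieve objective rather than using fixed similarity.

```python
# Hypothetical sketch of GNN-based in-context example retrieval.
# The GNN is stubbed as a single mean-aggregation layer; a trained,
# structure-enhanced encoder would replace it in practice.
import numpy as np

def gnn_embed(features: np.ndarray, adj: np.ndarray) -> np.ndarray:
    """One message-passing step: average each node's neighborhood
    (self-loops included), a stand-in for a trained GNN encoder."""
    adj_hat = adj + np.eye(adj.shape[0])      # add self-loops
    deg = adj_hat.sum(axis=1, keepdims=True)  # node degrees
    return (adj_hat @ features) / deg         # mean aggregation

def retrieve_examples(query_idx, labeled_idx, features, adj, k=2):
    """Rank labeled nodes by cosine similarity to the query node's
    structure-aware embedding; return the top-k node indices."""
    emb = gnn_embed(features, adj)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    scores = emb[labeled_idx] @ emb[query_idx]  # cosine similarities
    order = np.argsort(-scores)
    return [labeled_idx[i] for i in order[:k]]

# Toy graph: 5 nodes, 3 of them labeled; node 0 is the query node.
features = np.random.default_rng(0).normal(size=(5, 8))
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 0, 1, 0],
                [1, 0, 0, 0, 1],
                [0, 1, 0, 0, 1],
                [0, 0, 1, 1, 0]], dtype=float)
examples = retrieve_examples(0, [2, 3, 4], features, adj, k=2)
print("in-context example nodes:", examples)
```

The retrieved node ids would then be serialized (text attributes plus labels) into the LLM prompt as demonstrations; under the paper's learning-to-retrieve scheme, the scoring function itself is optimized against LLM feedback rather than fixed as above.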