Balasubramaniam Srinivasan
2024
CoverICL: Selective Annotation for In-Context Learning via Active Graph Coverage
Costas Mavromatis | Balasubramaniam Srinivasan | Zhengyuan Shen | Jiani Zhang | Huzefa Rangwala | Christos Faloutsos | George Karypis
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
In-context learning (ICL) adapts Large Language Models (LLMs) to new tasks without any parameter updates, requiring only a few annotated examples as input. In this work, we investigate selective annotation for ICL, where there is a limited budget for annotating examples, similar to low-budget active learning (AL). Although uncertainty-based selection is unreliable with little annotated data, we present CoverICL, an adaptive graph-based selection algorithm that effectively incorporates uncertainty sampling into selective annotation for ICL. First, CoverICL builds a nearest-neighbor graph based on the semantic similarity between candidate ICL examples. Then, CoverICL employs uncertainty estimation by the LLM to identify hard examples for the task. Selective annotation is performed over the active graph of the hard examples, adapting the process to the particular LLM used and the task tackled. CoverICL selects the most representative examples by solving a Maximum Coverage problem, approximating diversity-based sampling. Extensive experiments on ten datasets and seven LLMs show that, by incorporating uncertainty via coverage on the active graph, CoverICL (1) outperforms existing AL methods for ICL by 2–4.6% accuracy points, (2) is up to 2x more budget-efficient than SOTA methods for low-budget AL, and (3) generalizes better across tasks compared to non-graph alternatives.
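The graph-based selection described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `knn_graph` and `greedy_max_coverage` are hypothetical names, and the LLM uncertainty step is omitted (in CoverICL, `candidates` would be restricted to the hard examples identified via LLM uncertainty estimation before running the coverage step).

```python
import numpy as np

def knn_graph(embeddings, k):
    """Build a k-nearest-neighbor graph over example embeddings
    using cosine similarity; returns {node: set(neighbors)}."""
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-similarity
    neighbors = np.argsort(-sims, axis=1)[:, :k]
    return {i: set(nbrs) for i, nbrs in enumerate(neighbors.tolist())}

def greedy_max_coverage(graph, candidates, budget):
    """Greedily select `budget` nodes whose closed neighborhoods
    cover as many candidate nodes as possible (the standard
    greedy approximation for Maximum Coverage)."""
    selected, covered = [], set()
    for _ in range(budget):
        best, best_gain = None, -1
        for v in candidates:
            if v in selected:
                continue
            gain = len(({v} | graph[v]) - covered)
            if gain > best_gain:
                best, best_gain = v, gain
        selected.append(best)
        covered |= {best} | graph[best]
    return selected

# Toy data: two semantic clusters of three examples each.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0],
                [0.1, 0.9], [1.0, 0.05], [0.05, 1.0]])
graph = knn_graph(emb, k=2)
picked = greedy_max_coverage(graph, candidates=range(6), budget=2)
```

With an annotation budget of 2, the greedy step picks one representative per cluster, since covering a second example from an already-covered cluster yields no marginal gain.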
2023
NameGuess: Column Name Expansion for Tabular Data
Jiani Zhang | Zhengyuan Shen | Balasubramaniam Srinivasan | Shen Wang | Huzefa Rangwala | George Karypis
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Recent advances in large language models have revolutionized many sectors, including the database industry. One common challenge when dealing with large volumes of tabular data is the pervasive use of abbreviated column names, which can negatively impact performance on various data search, access, and understanding tasks. To address this issue, we introduce a new task, called NameGuess, that frames the expansion of column names (as used in database schemas) as a natural language generation problem. We create a training dataset of 384K abbreviated-expanded column pairs using a new data fabrication method, along with a human-annotated evaluation benchmark that includes 9.2K examples from real-world tables. To tackle the complexities associated with polysemy and ambiguity in NameGuess, we enhance auto-regressive language models by conditioning on table content and column header names, yielding a fine-tuned model (with 2.7B parameters) that matches human performance. Furthermore, we conduct a comprehensive analysis (on multiple LLMs) to validate the effectiveness of table content in NameGuess and identify promising future opportunities. Code has been made available at https://github.com/amazon-science/nameguess.
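The conditioning described above (table content plus column headers as input to an auto-regressive model) can be illustrated with a prompt-construction sketch. This is an assumed serialization for illustration only; `build_nameguess_prompt` is a hypothetical helper, not the paper's actual input format.

```python
def build_nameguess_prompt(abbrev_col, other_cols, table_rows):
    """Serialize table context into a text prompt asking a language
    model to expand one abbreviated column name.

    abbrev_col : the abbreviated column header to expand
    other_cols : the remaining column headers, for schema context
    table_rows : a few sample rows, for content context
    """
    header = ", ".join(list(other_cols) + [abbrev_col])
    rows = "\n".join(", ".join(str(v) for v in row) for row in table_rows)
    return (
        "Expand the abbreviated column name using the table below.\n"
        f"Columns: {header}\n"
        f"Sample rows:\n{rows}\n"
        f"Abbreviated column: {abbrev_col}\n"
        "Expanded name:"
    )

prompt = build_nameguess_prompt(
    abbrev_col="cust_nm",
    other_cols=["cust_id", "ord_dt"],
    table_rows=[["1001", "2023-04-01", "J. Smith"],
                ["1002", "2023-04-02", "A. Lee"]],
)
```

Including sample rows is what lets the model disambiguate polysemous abbreviations (e.g. whether `dt` is a date or a delta) from the values themselves.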
Co-authors
- Zhengyuan Shen 2
- Jiani Zhang 2
- Huzefa Rangwala 2
- George Karypis 2
- Costas Mavromatis 1