Xuefei He


2024

Do Text-to-Vis Benchmarks Test Real Use of Visualisations?
Hy Nguyen | Xuefei He | Andrew Reeson | Cecile Paris | Josiah Poon | Jonathan Kummerfeld
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Large language models are able to generate code for visualisations in response to simple user requests. This is a useful application and an appealing one for NLP research because plots of data provide grounding for language. However, there are relatively few benchmarks, and those that exist may not be representative of what users do in practice. This paper investigates whether benchmarks reflect real-world use through an empirical study comparing benchmark datasets with code from public repositories. Our findings reveal a substantial gap, with evaluations not testing the same distribution of chart types, attributes, and actions as real-world examples. One dataset is representative, but requires extensive modification to become a practical end-to-end benchmark. This shows that new benchmarks are needed to support the development of systems that truly address users' visualisation needs. These observations will guide future data creation, highlighting which features hold genuine significance for users.