Andrew Reeson
2024
Do LLMs Generate Creative and Visually Accessible Data Visualisations?
Clarissa Miranda-Pena | Andrew Reeson | Cécile Paris | Josiah Poon | Jonathan K. Kummerfeld
Proceedings of the 22nd Annual Workshop of the Australasian Language Technology Association
Data visualisation is a valuable task that combines careful data processing with creative design. Large Language Models (LLMs) are now capable of responding to a data visualisation request in natural language with code that generates accurate data visualisations (e.g., using Matplotlib), but what about human-centered factors, such as the creativity and accessibility of the data visualisations? In this work, we study human perceptions of creativity in the data visualisations generated by LLMs, and propose metrics for accessibility. We generate a range of visualisations using GPT-4 and Claude-2 with controlled variations in prompt and inference parameters, to encourage the generation of different types of data visualisations for the same data. Subsets of these data visualisations are presented to people in a survey with questions that probe human perceptions of different aspects of creativity and accessibility. We find that the models produce visualisations that are novel, but not surprising. Our results also show that our accessibility metrics are consistent with human judgements. In all respects, the LLMs underperform visualisations produced by human-written code. To go beyond the simplest requests, these models need to become aware of human-centered factors, while maintaining accuracy.
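The abstract above mentions automated accessibility metrics applied to LLM-generated Matplotlib charts, but does not spell out the metrics themselves. As a purely illustrative sketch (not the paper's method), the following Python snippet shows the kind of lightweight, automated check one could run on a generated figure; the function names and the minimum-font-size heuristic are hypothetical.

```python
# Illustrative only: a crude accessibility heuristic for a Matplotlib figure,
# checking that every visible text element meets a minimum font size.
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from matplotlib.text import Text


def min_font_size(fig) -> float:
    """Return the smallest font size (in points) of any non-empty text in the figure."""
    texts = fig.findobj(match=Text)
    sizes = [t.get_fontsize() for t in texts if t.get_text().strip()]
    return min(sizes) if sizes else float("inf")


def passes_basic_accessibility(fig, min_pt: float = 10.0) -> bool:
    """Hypothetical check: all labels, ticks, and titles are at least `min_pt` points."""
    return min_font_size(fig) >= min_pt


if __name__ == "__main__":
    fig, ax = plt.subplots()
    ax.bar(["A", "B", "C"], [3, 7, 5])
    ax.set_title("Example chart", fontsize=14)
    ax.set_ylabel("Count", fontsize=12)
    print("passes basic check:", passes_basic_accessibility(fig))
```

In a study like the one described, such programmatic checks would complement, not replace, the human judgements collected in the survey.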
Do Text-to-Vis Benchmarks Test Real Use of Visualisations?
Hy Nguyen | Xuefei He | Andrew Reeson | Cecile Paris | Josiah Poon | Jonathan K. Kummerfeld
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models are able to generate code for visualisations in response to simple user requests. This is a useful application and an appealing one for NLP research because plots of data provide grounding for language. However, there are relatively few benchmarks, and those that exist may not be representative of what users do in practice. This paper investigates whether benchmarks reflect real-world use through an empirical study comparing benchmark datasets with code from public repositories. Our findings reveal a substantial gap, with evaluations not testing the same distribution of chart types, attributes, and actions as real-world examples. One dataset is representative, but requires extensive modification to become a practical end-to-end benchmark. This shows that new benchmarks are needed to support the development of systems that truly address users’ visualisation needs. These observations will guide future data creation, highlighting which features hold genuine significance for users.