XferBench: a Data-Driven Benchmark for Emergent Language

Brendon Boldt, David Mortensen


Abstract
In this paper, we introduce a benchmark for evaluating the overall quality of emergent languages using data-driven methods. Specifically, we interpret the notion of the “quality” of an emergent language as its similarity to human language within a deep learning framework. We measure this by using the emergent language as pretraining data for downstream NLP tasks in human language—the better the downstream performance, the better the emergent language. We implement this benchmark as an easy-to-use Python package that only requires a text file of utterances from the emergent language to be evaluated. Finally, we empirically test the benchmark’s validity using human, synthetic, and emergent language baselines.
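The sketch below illustrates the general pretrain-then-transfer idea described in the abstract; it is not the actual XferBench package API. All class and function names (TinyCausalLM, train_lm, evaluate_xent, emergent_batches, human_train, human_test) are hypothetical, and the model here stands in for whatever language model the benchmark actually uses: pretrain on emergent-language utterances, adapt to a human-language corpus, and use held-out cross-entropy as the transfer score (lower is better).

    # Hypothetical sketch of the pretrain-on-emergent-language, evaluate-on-human-language
    # idea; positional encodings and other details are omitted for brevity.
    import torch
    import torch.nn as nn

    class TinyCausalLM(nn.Module):
        """Small Transformer LM standing in for the benchmark's model."""
        def __init__(self, vocab_size: int, d_model: int = 128, n_layers: int = 2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, tokens: torch.Tensor) -> torch.Tensor:
            # Causal mask: each position attends only to earlier positions.
            mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
            hidden = self.encoder(self.embed(tokens), mask=mask)
            return self.head(hidden)

    def train_lm(model, batches, epochs=1, lr=1e-3):
        """Next-token-prediction training loop shared by both phases."""
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for tokens in batches:  # tokens: (batch, seq_len) integer ids
                logits = model(tokens[:, :-1])
                loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                               tokens[:, 1:].reshape(-1))
                opt.zero_grad()
                loss.backward()
                opt.step()
        return model

    def evaluate_xent(model, batches):
        """Mean next-token cross-entropy on held-out human-language data."""
        loss_fn = nn.CrossEntropyLoss()
        losses = []
        with torch.no_grad():
            for tokens in batches:
                logits = model(tokens[:, :-1])
                losses.append(loss_fn(logits.reshape(-1, logits.size(-1)),
                                      tokens[:, 1:].reshape(-1)).item())
        return sum(losses) / len(losses)

    # Usage sketch: emergent_batches come from the text file of emergent-language
    # utterances; human_train / human_test are tokenized human-language corpora.
    # model = TinyCausalLM(vocab_size=1000)
    # train_lm(model, emergent_batches)        # phase 1: pretrain on emergent language
    # train_lm(model, human_train)             # phase 2: adapt to human language
    # print(evaluate_xent(model, human_test))  # lower cross-entropy = better transfer

In this framing, the emergent language is treated purely as data: nothing about the emergent-communication setup itself is inspected, which is what makes the benchmark applicable to any system that can emit a corpus of utterances.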
Anthology ID:
2024.naacl-long.82
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
1475–1489
URL:
https://aclanthology.org/2024.naacl-long.82
Cite (ACL):
Brendon Boldt and David Mortensen. 2024. XferBench: a Data-Driven Benchmark for Emergent Language. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1475–1489, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
XferBench: a Data-Driven Benchmark for Emergent Language (Boldt & Mortensen, NAACL 2024)
PDF:
https://aclanthology.org/2024.naacl-long.82.pdf
Copyright:
2024.naacl-long.82.copyright.pdf