Improved N-Best Extraction with an Evaluation on Language Data

Johanna Björklund, Frank Drewes, Anna Jonsson


Abstract
We show that a previously proposed algorithm for the N-best trees problem can be made more efficient by changing how it arranges and explores the search space. Given an integer N and a weighted tree automaton (wta) M over the tropical semiring, the algorithm computes N trees of minimal weight with respect to M. The modifications increase the laziness of the evaluation strategy, which makes the new algorithm asymptotically more efficient than its predecessor. The algorithm is implemented in the software Betty and compared to the state-of-the-art algorithm for extracting the N best runs, which is implemented in the software toolkit Tiburon. The data sets used in the experiments are wtas resulting from real-world natural language processing tasks, as well as artificially created wtas with varying degrees of nondeterminism. We find that Betty outperforms Tiburon on all tested data sets with respect to running time, while Tiburon seems to be the more memory-efficient choice.
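To make the task concrete, the sketch below illustrates the N-best trees problem on a toy wta over the tropical semiring (min, +): a best-first search over partial trees that filters out duplicate trees, so that it returns the N best trees rather than the N best runs. Every identifier, rule, and weight in it is invented for illustration; it is a naive baseline of the kind the paper improves on, not the algorithm implemented in Betty.

```python
import heapq

# Toy weighted tree automaton (wta) over the tropical semiring (min, +).
# The automaton, rules, and weights are hypothetical, chosen only to
# illustrate the N-best-trees problem; this is NOT Betty's algorithm.
RULES = [
    # (target_state, symbol, child_states, rule_weight)
    ("q", "a", (), 1.0),
    ("q", "b", (), 2.0),
    ("q", "f", ("q", "q"), 0.5),
]
FINAL = {"q": 0.0}  # final states with their final weights

HOLE = "?"  # reserved label marking an unexpanded state in a partial tree


def find_hole(tree, path=()):
    """Path to the leftmost hole of a (partial) tree, or None if complete."""
    label, rest = tree
    if label == HOLE:
        return path
    for i, child in enumerate(rest):
        p = find_hole(child, path + (i,))
        if p is not None:
            return p
    return None


def subtree_at(tree, path):
    for i in path:
        tree = tree[1][i]
    return tree


def replace_at(tree, path, new):
    """Copy of `tree` with the subtree at `path` replaced by `new`."""
    if not path:
        return new
    label, children = tree
    return (label, tuple(
        replace_at(c, path[1:], new) if j == path[0] else c
        for j, c in enumerate(children)
    ))


def n_best_trees(n):
    """Up to n distinct minimal-weight trees, in nondecreasing weight.

    Partial trees carry holes labelled by states; the leftmost hole is
    expanded with every matching rule. Because all rule weights in the
    toy automaton are strictly positive, the accumulated weight is a
    lower bound on any completion, so complete trees are popped from
    the priority queue in weight order.
    """
    heap = [(w, (HOLE, q)) for q, w in FINAL.items()]
    heapq.heapify(heap)
    results, seen = [], set()
    while heap and len(results) < n:
        weight, tree = heapq.heappop(heap)
        path = find_hole(tree)
        if path is None:
            # Several runs may accept the same tree; keeping only unseen
            # trees is what separates N-best trees from N-best runs.
            if tree not in seen:
                seen.add(tree)
                results.append((weight, tree))
            continue
        state = subtree_at(tree, path)[1]
        for target, symbol, child_states, w in RULES:
            if target != state:
                continue
            node = (symbol, tuple((HOLE, q) for q in child_states))
            heapq.heappush(heap, (weight + w, replace_at(tree, path, node)))
    return results


if __name__ == "__main__":
    for weight, tree in n_best_trees(4):
        print(weight, tree)
```

The `seen` set is where trees and runs diverge: a run-based extractor such as Tiburon's returns duplicate trees whenever several runs of the automaton accept the same tree, whereas this sketch (and, far more efficiently, Betty) returns each tree at most once.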
Anthology ID:
2022.cl-1.4
Volume:
Computational Linguistics, Volume 48, Issue 1 - March 2022
Month:
March
Year:
2022
Address:
Cambridge, MA
Venue:
CL
Publisher:
MIT Press
Pages:
119–153
URL:
https://aclanthology.org/2022.cl-1.4
DOI:
10.1162/coli_a_00427
Cite (ACL):
Johanna Björklund, Frank Drewes, and Anna Jonsson. 2022. Improved N-Best Extraction with an Evaluation on Language Data. Computational Linguistics, 48(1):119–153.
Cite (Informal):
Improved N-Best Extraction with an Evaluation on Language Data (Björklund et al., CL 2022)
PDF:
https://aclanthology.org/2022.cl-1.4.pdf
Video:
https://aclanthology.org/2022.cl-1.4.mp4
Code:
tm11ajn/betty