Time-and-Space-Efficient Weighted Deduction

Jason Eisner


Abstract
Many NLP algorithms have been described in terms of deduction systems. Unweighted deduction allows a generic forward-chaining execution strategy. For weighted deduction, however, efficient execution should propagate the weight of each item only after it has converged. This means visiting the items in topologically sorted order (as in dynamic programming). Toposorting is fast on a materialized graph; unfortunately, materializing the graph would take extra space. Is there a generic weighted deduction strategy which, for every acyclic deduction system and every input, uses only a constant factor more time and space than generic unweighted deduction? After reviewing past strategies, we answer this question in the affirmative by combining ideas of Goodman (1999) and Kahn (1962). We also give an extension to cyclic deduction systems, based on Tarjan (1972).
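For reference, the following is a minimal Python sketch of the materialized-graph baseline that the abstract contrasts against, not the paper's space-efficient strategy: the instantiated rules are stored explicitly, and weights are then propagated in Kahn (1962) topological order so that each item's weight is read only after it has converged. The rule representation, the function name propagate_weights, and the use of the (+, ×) semiring are illustrative assumptions, not taken from the paper.

from collections import defaultdict, deque

def propagate_weights(rules, axiom_weights):
    """rules: list of (consequent, [antecedents], rule_weight) instantiated rules.
    axiom_weights: dict from axiom items (assumed not derivable by any rule) to weights.
    Returns a dict mapping each reachable item to its converged weight (acyclic systems only)."""
    rules_using = defaultdict(list)   # antecedent item -> indices of rules that consume it
    remaining = []                    # rule index -> count of antecedents not yet converged
    pending_rules = defaultdict(int)  # item -> number of rules deriving it that have not yet fired
    for i, (head, body, _) in enumerate(rules):
        remaining.append(len(body))
        pending_rules[head] += 1
        for b in body:
            rules_using[b].append(i)

    weight = defaultdict(float, axiom_weights)
    queue = deque(axiom_weights)      # items whose weight has converged (initially the axioms)

    while queue:
        item = queue.popleft()
        for i in rules_using[item]:
            remaining[i] -= 1
            if remaining[i] == 0:     # every antecedent has converged: fire the rule
                head, body, w = rules[i]
                contrib = w
                for b in body:
                    contrib *= weight[b]
                weight[head] += contrib
                pending_rules[head] -= 1
                if pending_rules[head] == 0:   # all rules deriving head have fired: head converges
                    queue.append(head)
    return dict(weight)

# Tiny worked example with hypothetical items a, b, c, d:
#   c = 0.5*a*b + 0.1*a = 0.10    d = 2.0*c*b = 0.06
rules = [("c", ["a", "b"], 0.5), ("c", ["a"], 0.1), ("d", ["c", "b"], 2.0)]
print(propagate_weights(rules, {"a": 0.4, "b": 0.3}))

The auxiliary tables rules_using, remaining, and pending_rules are exactly the materialization cost that the abstract says the paper's strategy avoids.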
Anthology ID:
2023.tacl-1.54
Volume:
Transactions of the Association for Computational Linguistics, Volume 11
Year:
2023
Address:
Cambridge, MA
Venue:
TACL
Publisher:
MIT Press
Pages:
960–973
URL:
https://aclanthology.org/2023.tacl-1.54
DOI:
10.1162/tacl_a_00588
Cite (ACL):
Jason Eisner. 2023. Time-and-Space-Efficient Weighted Deduction. Transactions of the Association for Computational Linguistics, 11:960–973.
Cite (Informal):
Time-and-Space-Efficient Weighted Deduction (Eisner, TACL 2023)
PDF:
https://aclanthology.org/2023.tacl-1.54.pdf
Video:
https://aclanthology.org/2023.tacl-1.54.mp4