Identifying the limits of transformers when performing model-checking with natural language

Tharindu Madusanka, Riza Batista-Navarro, Ian Pratt-Hartmann


Abstract
Can transformers learn to comprehend logical semantics in natural language? Although many strands of work on natural language inference have focused on transformer models’ ability to perform reasoning on text, the above question has not been answered adequately. This is primarily because the logical problems studied in the context of natural language inference vary in computational complexity with the logical and grammatical constructs appearing in the sentences. As such, it is difficult to assess whether a difference in accuracy is due to the logical semantics or to the difference in computational complexity. A problem much better suited to addressing this issue is the model-checking problem, whose computational complexity remains constant (for fragments derived from first-order logic). However, the model-checking problem has so far remained untouched in natural language inference research. We therefore investigated the problem of model-checking with natural language in order to answer the question of how the logical semantics of natural language affects transformers’ performance. Our results imply that the language fragment has a significant impact on the performance of transformer models. Furthermore, we hypothesise that a transformer model can at least partially understand the logical semantics of natural language but cannot completely learn the rules governing the model-checking algorithm.
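To make the task concrete: model-checking asks whether a given finite model satisfies a given sentence. A minimal sketch, assuming a toy model of entities with unary predicates and two illustrative sentence forms ("Every p is a q", "Some p is a q") — the predicate names and model are hypothetical examples, not the paper's actual dataset:

```python
# Model-checking sketch: a finite model maps each entity to the set of
# unary predicates it satisfies; a checker decides truth of a quantified
# sentence in that model. Predicates and entities here are illustrative.

def holds_all(model, p, q):
    """'Every p is a q': every entity satisfying p also satisfies q."""
    return all(q in props for props in model.values() if p in props)

def holds_some(model, p, q):
    """'Some p is a q': at least one entity satisfies both p and q."""
    return any(p in props and q in props for props in model.values())

model = {
    "alice": {"artist", "beekeeper"},
    "bob": {"artist"},
    "carol": {"chemist"},
}

print(holds_all(model, "artist", "beekeeper"))   # False: bob is an artist but not a beekeeper
print(holds_some(model, "artist", "beekeeper"))  # True: alice is both
```

Note that the checker's cost is fixed by the size of the model, independently of which fragment the sentence comes from — the property the abstract exploits to hold computational complexity constant across fragments.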
Anthology ID:
2023.eacl-main.257
Volume:
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
Month:
May
Year:
2023
Address:
Dubrovnik, Croatia
Editors:
Andreas Vlachos, Isabelle Augenstein
Venue:
EACL
SIG:
Publisher:
Association for Computational Linguistics
Note:
Pages:
3539–3550
Language:
URL:
https://aclanthology.org/2023.eacl-main.257
DOI:
10.18653/v1/2023.eacl-main.257
Award:
EACL Outstanding Paper
Bibkey:
Cite (ACL):
Tharindu Madusanka, Riza Batista-Navarro, and Ian Pratt-Hartmann. 2023. Identifying the limits of transformers when performing model-checking with natural language. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3539–3550, Dubrovnik, Croatia. Association for Computational Linguistics.
Cite (Informal):
Identifying the limits of transformers when performing model-checking with natural language (Madusanka et al., EACL 2023)
PDF:
https://aclanthology.org/2023.eacl-main.257.pdf
Video:
https://aclanthology.org/2023.eacl-main.257.mp4