SHAPELURN: An Interactive Language Learning Game with Logical Inference

Katharina Stein, Leonie Harter, Luisa Geiger


Abstract
We investigate whether a model can learn natural language from minimal linguistic input through interaction. To address this question, we design and implement an interactive language learning game that learns logical semantic representations compositionally. The game allows us to explore the benefits of logical inference for natural language learning. Evaluation shows that the model accurately narrows down potential logical representations for words over the course of the game, suggesting that it can successfully learn lexical mappings from scratch.
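The abstract describes the learning mechanism only at a high level, so the sketch below illustrates the general idea of narrowing down candidate logical representations for words through interactive feedback. It is a minimal toy example in Python, not the authors' implementation (see the linked code repository below); the predicate inventory, the Lexicon class, and the selection-based feedback signal are simplifying assumptions made for this illustration.

```python
# Minimal, illustrative candidate-elimination sketch (NOT the SHAPELURN
# implementation): every word starts with all predicates as candidate
# meanings, and candidates inconsistent with player feedback are pruned.

from itertools import product

# Hypothetical predicate inventory over simple scene objects (assumed for this sketch).
PREDICATES = {
    "circle": lambda o: o["shape"] == "circle",
    "square": lambda o: o["shape"] == "square",
    "blue": lambda o: o["colour"] == "blue",
    "red": lambda o: o["colour"] == "red",
}


class Lexicon:
    """Tracks, for every word seen so far, the predicates still consistent with feedback."""

    def __init__(self):
        self.candidates = {}

    def assignments(self, words):
        """Yield every combination of candidate predicates for the given words."""
        for w in words:
            self.candidates.setdefault(w, set(PREDICATES))
        for combo in product(*(sorted(self.candidates[w]) for w in words)):
            yield dict(zip(words, combo))

    def update(self, utterance, scene, selected_ids):
        """Keep only word-predicate candidates whose denotation matches the player's selection."""
        words = utterance.lower().split()
        surviving = {w: set() for w in words}
        for assignment in self.assignments(words):
            # Denotation of the utterance under this candidate assignment:
            # the objects that satisfy every predicate in it.
            denotation = {
                i for i, obj in enumerate(scene)
                if all(PREDICATES[p](obj) for p in assignment.values())
            }
            if denotation == set(selected_ids):
                for w, p in assignment.items():
                    surviving[w].add(p)
        for w, preds in surviving.items():
            if preds:  # never empty a word's candidate set entirely
                self.candidates[w] &= preds


if __name__ == "__main__":
    lex = Lexicon()
    scene = [
        {"shape": "circle", "colour": "blue"},   # object 0
        {"shape": "square", "colour": "red"},    # object 1
        {"shape": "circle", "colour": "red"},    # object 2
    ]
    # Round 1: the player indicates that "blue circle" picks out object 0 only.
    lex.update("blue circle", scene, selected_ids=[0])
    # Round 2: "red circle" picks out object 2 only.
    lex.update("red circle", scene, selected_ids=[2])
    print(lex.candidates)  # "circle" and "red" are pinned down; "blue" is narrowed
```

Over repeated rounds, intersecting the surviving candidate sets gradually pins each word to a single predicate, mirroring the kind of narrowing described in the abstract.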
Anthology ID:
2021.internlp-1.3
Volume:
Proceedings of the First Workshop on Interactive Learning for Natural Language Processing
Month:
August
Year:
2021
Address:
Online
Editors:
Kianté Brantley, Soham Dan, Iryna Gurevych, Ji-Ung Lee, Filip Radlinski, Hinrich Schütze, Edwin Simpson, Lili Yu
Venue:
InterNLP
Publisher:
Association for Computational Linguistics
Pages:
16–24
URL:
https://aclanthology.org/2021.internlp-1.3
DOI:
10.18653/v1/2021.internlp-1.3
Cite (ACL):
Katharina Stein, Leonie Harter, and Luisa Geiger. 2021. SHAPELURN: An Interactive Language Learning Game with Logical Inference. In Proceedings of the First Workshop on Interactive Learning for Natural Language Processing, pages 16–24, Online. Association for Computational Linguistics.
Cite (Informal):
SHAPELURN: An Interactive Language Learning Game with Logical Inference (Stein et al., InterNLP 2021)
PDF:
https://aclanthology.org/2021.internlp-1.3.pdf
Code:
itsluisa/shapelurn