Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands

Muhannad Alomari, Paul Duckworth, Majd Hawasly, David C. Hogg, Anthony G. Cohn


Abstract
We present a cognitively plausible system capable of acquiring knowledge in language and vision from pairs of short video clips and linguistic descriptions. The aim of this work is to teach a robot manipulator how to execute natural language commands by demonstration. This is achieved in three steps: first, learning a set of visual ‘concepts’ that abstract the raw visual feature spaces into representations with human-level meaning; second, learning the mapping (grounding) between words and the extracted visual concepts; and third, inducing grammar rules via a semantic representation known as Robot Control Language (RCL). We evaluate our approach against state-of-the-art supervised and unsupervised grounding and grammar-induction systems, and show that a robot can learn to execute previously unseen commands from pairs of unlabelled linguistic and visual inputs.
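The second step, grounding words in visual concepts, can be illustrated with a simple cross-situational learning scheme: a word is mapped to the visual concept it co-occurs with most consistently across demonstrations. The Python sketch below shows the idea on toy data; the `ground` function, the concept labels, and the demonstration pairs are hypothetical illustrations of the general technique, not the authors' implementation.

```python
# Minimal sketch of cross-situational word grounding via co-occurrence
# counts. Illustrative only; not the system described in the paper.
from collections import defaultdict

def ground(pairs):
    """Map each word to the visual concept it co-occurs with most often
    across (sentence, concepts) demonstration pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence, concepts in pairs:
        for word in sentence.lower().split():
            for concept in concepts:
                counts[word][concept] += 1
    return {word: max(c, key=c.get) for word, c in counts.items()}

# Toy demonstrations: each command paired with the (hypothetical) visual
# concepts extracted from the corresponding video clip.
demos = [
    ("pick up the red block",  ["action:pick", "colour:red",  "shape:block"]),
    ("pick up the blue block", ["action:pick", "colour:blue", "shape:block"]),
    ("drop the red ball",      ["action:drop", "colour:red",  "shape:ball"]),
    ("drop the blue ball",     ["action:drop", "colour:blue", "shape:ball"]),
]

groundings = ground(demos)
print(groundings["red"])   # -> colour:red  (highest co-occurrence count)
print(groundings["blue"])  # -> colour:blue
# Function words such as "the" stay ambiguous: they co-occur with every
# concept equally, so a real system needs more data or explicit handling.
```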
Anthology ID: W17-2805
Volume: Proceedings of the First Workshop on Language Grounding for Robotics
Month: August
Year: 2017
Address: Vancouver, Canada
Editors: Mohit Bansal, Cynthia Matuszek, Jacob Andreas, Yoav Artzi, Yonatan Bisk
Venue: RoboNLP
Publisher: Association for Computational Linguistics
Pages: 35–43
URL: https://aclanthology.org/W17-2805
DOI: 10.18653/v1/W17-2805
Cite (ACL): Muhannad Alomari, Paul Duckworth, Majd Hawasly, David C. Hogg, and Anthony G. Cohn. 2017. Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands. In Proceedings of the First Workshop on Language Grounding for Robotics, pages 35–43, Vancouver, Canada. Association for Computational Linguistics.
Cite (Informal): Natural Language Grounding and Grammar Induction for Robotic Manipulation Commands (Alomari et al., RoboNLP 2017)
PDF: https://aclanthology.org/W17-2805.pdf