The Development of Multimodal Lexical Resources

James Pustejovsky, Tuan Do, Gitit Kehat, Nikhil Krishnaswamy


Abstract
Human communication is a multimodal activity, involving not only speech and written expressions but also intonation, images, gestures, visual cues, and the interpretation of actions through perception. In this paper, we describe the design of a multimodal lexicon that can accommodate the diverse modalities that arise in NLP applications. We have been developing a multimodal semantic representation, VoxML, that integrates the encoding of semantic, visual, gestural, and action-based features associated with linguistic expressions.
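To make the idea concrete, below is a minimal Python sketch of what one entry in such a multimodal lexicon might look like. The field names loosely follow the VoxML attribute inventory (LEX, TYPE, HABITAT, AFFORD_STR, EMBODIMENT); the Voxeme class and the concrete values for the "cup" entry are hypothetical illustrations, not the paper's actual encoding.

from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch only: attribute names are modeled on the VoxML
# inventory (LEX, TYPE, HABITAT, AFFORD_STR, EMBODIMENT); the values
# below are hypothetical, not taken from the paper.

@dataclass
class Voxeme:
    pred: str                        # lexical predicate, e.g. "cup"
    head: str                        # geometric head type, e.g. "cylindroid"
    components: List[str] = field(default_factory=list)   # object parts
    concavity: Optional[str] = None  # e.g. "concave" for a container
    habitats: List[str] = field(default_factory=list)     # typical orientations/placements
    affordances: List[str] = field(default_factory=list)  # action-based features
    movable: bool = True             # embodiment: can an agent move it?

# A hypothetical entry for "cup": a concave, graspable container.
cup = Voxeme(
    pred="cup",
    head="cylindroid",
    concavity="concave",
    habitats=["upright along the Y axis"],
    affordances=["grasp", "contain liquid", "lift"],
)
print(cup.pred, cup.affordances)

The point of the sketch is that a single lexical entry bundles linguistic, geometric, and action-based information, which is what distinguishes a multimodal lexicon from a purely textual one.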
Anthology ID: W16-3807
Volume: Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces (GramLex)
Month: December
Year: 2016
Address: Osaka, Japan
Editors: Eva Hajičová, Igor Boguslavsky
Venue: GramLex
Publisher: The COLING 2016 Organizing Committee
Pages: 41–47
URL: https://aclanthology.org/W16-3807
Cite (ACL): James Pustejovsky, Tuan Do, Gitit Kehat, and Nikhil Krishnaswamy. 2016. The Development of Multimodal Lexical Resources. In Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces (GramLex), pages 41–47, Osaka, Japan. The COLING 2016 Organizing Committee.
Cite (Informal): The Development of Multimodal Lexical Resources (Pustejovsky et al., GramLex 2016)
PDF: https://aclanthology.org/W16-3807.pdf
Data: Visual Question Answering