Zhao Han


2023

Exploring the Naturalness of Cognitive Status-Informed Referring Form Selection Models
Gabriel Del Castillo | Grace Clark | Zhao Han | Tom Williams
Proceedings of the 16th International Natural Language Generation Conference

Language-capable robots must be able to communicate efficiently and naturally about objects in their environment. A key part of communication is Referring Form Selection (RFS): the process of selecting a form such as it, that, or the N to use when referring to an object. Recent cognitive status-informed computational RFS models have been evaluated in terms of goodness-of-fit to human data. It is as yet unclear, however, whether these models actually select referring forms that are any more natural than baseline alternatives, regardless of goodness-of-fit. Through a human-subject study designed to assess this question, we show that even though cognitive status-informed referring form selection models achieve good fit to human data, they do not (yet) produce concrete benefits in terms of naturality. On the other hand, our results show that human utterances also had high variability in perceived naturality, demonstrating the challenges of evaluating RFS naturality.

2022

Evaluating Referring Form Selection Models in Partially-Known Environments
Zhao Han | Polina Rygina | Thomas Williams
Proceedings of the 15th International Conference on Natural Language Generation