Face2Text revisited: Improved data set and baseline results

Marc Tanti, Shaun Abdilla, Adrian Muscat, Claudia Borg, Reuben A. Farrugia, Albert Gatt


Abstract
Current image description generation models do not transfer well to the task of describing human faces. To encourage the development of more human-focused descriptions, we developed a new data set of facial descriptions based on the CelebA image data set. We describe the properties of this data set, and present results from a face description generator trained on it, exploring the feasibility of transfer learning from VGGFace/ResNet CNNs. Comparisons are drawn through both automated metrics and human evaluation by 76 English-speaking participants. The descriptions generated by the VGGFace-LSTM + Attention model are closest to the ground truth according to human evaluation, whilst the ResNet-LSTM + Attention model obtained the highest CIDEr and CIDEr-D results (1.252 and 0.686 respectively). Together, the new data set and these experimental results provide data and baselines for future work in this area.
Anthology ID:
2022.pvlam-1.6
Volume:
Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind
Month:
June
Year:
2022
Address:
Marseille, France
Editors:
Patrizia Paggio, Albert Gatt, Marc Tanti
Venue:
PVLAM
Publisher:
European Language Resources Association
Note:
Pages:
41–47
URL:
https://aclanthology.org/2022.pvlam-1.6
Cite (ACL):
Marc Tanti, Shaun Abdilla, Adrian Muscat, Claudia Borg, Reuben A. Farrugia, and Albert Gatt. 2022. Face2Text revisited: Improved data set and baseline results. In Proceedings of the 2nd Workshop on People in Vision, Language, and the Mind, pages 41–47, Marseille, France. European Language Resources Association.
Cite (Informal):
Face2Text revisited: Improved data set and baseline results (Tanti et al., PVLAM 2022)
PDF:
https://aclanthology.org/2022.pvlam-1.6.pdf
Data
CelebA, Flickr30k, MS COCO