%0 Conference Proceedings
%T Coreference by Appearance: Visually Grounded Event Coreference Resolution
%A Wang, Liming
%A Feng, Shengyu
%A Lin, Xudong
%A Li, Manling
%A Ji, Heng
%A Chang, Shih-Fu
%Y Ogrodniczuk, Maciej
%Y Pradhan, Sameer
%Y Poesio, Massimo
%Y Grishina, Yulia
%Y Ng, Vincent
%S Proceedings of the Fourth Workshop on Computational Models of Reference, Anaphora and Coreference
%D 2021
%8 November
%I Association for Computational Linguistics
%C Punta Cana, Dominican Republic
%F wang-etal-2021-coreference
%X Event coreference resolution is critical to understanding events in the growing volume of online news that spans multiple modalities, including text, video, and speech. However, the events and entities depicted in different modalities may not be perfectly aligned and can be difficult to annotate, which makes the task especially challenging when little supervision is available. To address these issues, we propose a supervised model based on an attention mechanism and an unsupervised model based on statistical machine translation, both capable of learning the relative importance of modalities for event coreference resolution. Experiments on a video multimedia event dataset show that our multimodal models outperform text-only systems on event coreference resolution tasks. A careful analysis reveals that the performance gain of the multimodal model, especially under unsupervised settings, comes from better learning of visually salient events.
%R 10.18653/v1/2021.crac-1.14
%U https://aclanthology.org/2021.crac-1.14
%U https://doi.org/10.18653/v1/2021.crac-1.14
%P 132-140