Yunbin Tu
2024
Context-aware Difference Distilling for Multi-change Captioning
Yunbin Tu | Liang Li | Li Su | Zheng-Jun Zha | Chenggang Yan | Qingming Huang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Multi-change captioning aims to describe complex and coupled changes within an image pair in natural language. Compared with single-change captioning, this task requires the model to have a higher-level cognition ability to reason about an arbitrary number of changes. In this paper, we propose a novel context-aware difference distilling (CARD) network to capture all genuine changes for sentence generation. Given an image pair, CARD first decouples context features that aggregate all similar/dissimilar semantics, termed common/difference context features. Then, consistency and independence constraints are designed to guarantee the alignment/discrepancy of the common/difference context features. Further, the common context features guide the model to mine locally unchanged features, which are subtracted from the pair to distill locally difference features. Next, the difference context features augment the locally difference features, ensuring that all changes are distilled. In this way, we obtain an omni-representation of all changes, which a transformer decoder translates into linguistic sentences. Extensive experiments on three public datasets show that CARD performs favourably against state-of-the-art methods. The code is available at https://github.com/tuyunbin/CARD.
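The subtract-to-distill step described in the abstract lends itself to a compact illustration. Below is a minimal PyTorch sketch, assuming (N, D) patch features per image; the function name `distill_differences` and the plain scaled dot-product attention are illustrative choices, not the authors' implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def distill_differences(feat_a, feat_b):
    """Illustrative subtract-to-distill step (a sketch, not the CARD code).

    feat_a, feat_b: (N, D) patch features of the "before"/"after" images.
    Returns (diff_a, diff_b): per-image features with the locally
    unchanged content subtracted out.
    """
    scale = feat_a.size(-1) ** 0.5
    # Cross-image attention: how well each patch of one image is
    # explained by patches of the other (a common-context cue).
    attn_ab = F.softmax(feat_a @ feat_b.t() / scale, dim=-1)
    attn_ba = F.softmax(feat_b @ feat_a.t() / scale, dim=-1)

    # Locally unchanged features: each patch re-expressed from the
    # other image; well-matched (unchanged) patches reconstruct cleanly.
    common_a = attn_ab @ feat_b
    common_b = attn_ba @ feat_a

    # Subtract the unchanged part to distill the locally difference features.
    return feat_a - common_a, feat_b - common_b

# Toy usage: 49 patches of 512-d features per image.
diff_a, diff_b = distill_differences(torch.randn(49, 512), torch.randn(49, 512))
print(diff_a.shape, diff_b.shape)  # torch.Size([49, 512]) twice
```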
2021
Semantic Relation-aware Difference Representation Learning for Change Captioning
Yunbin Tu | Tingting Yao | Liang Li | Jiedong Lou | Shengxiang Gao | Zhengtao Yu | Chenggang Yan
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
R³Net: Relation-embedded Representation Reconstruction Network for Change Captioning
Yunbin Tu | Liang Li | Chenggang Yan | Shengxiang Gao | Zhengtao Yu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Change captioning aims to describe the fine-grained disagreement between two similar images in a natural language sentence. Viewpoint change is the most typical distractor in this task, because it changes the scale and location of the objects and overwhelms the representation of the real change. In this paper, we propose a Relation-embedded Representation Reconstruction Network (R³Net) to explicitly distinguish the real change from the large amount of clutter and irrelevant changes. Specifically, a relation-embedded module is first devised to explore potentially changed objects amid the clutter. Then, based on the semantic similarities of corresponding locations in the two images, a representation reconstruction module (RRM) is designed to learn a reconstruction representation and further model the difference representation. Besides, we introduce a syntactic skeleton predictor (SSP) to enhance the semantic interaction between change localization and caption generation. Extensive experiments show that the proposed method achieves state-of-the-art results on two public datasets.
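The RRM's similarity-based reconstruction can likewise be sketched in a few lines. A minimal sketch, assuming (N, D) features at corresponding locations; the gating by clamped cosine similarity is an illustrative simplification, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def reconstruct_and_diff(feat_a, feat_b):
    """Illustrative RRM-style step (a simplification, not the R³Net code).

    feat_a, feat_b: (N, D) features at corresponding locations of the
    "before"/"after" images.
    """
    # Per-location semantic similarity, clamped to [0, 1] to act as a gate.
    gate = F.cosine_similarity(feat_a, feat_b, dim=-1).clamp(min=0.0).unsqueeze(-1)

    # Reconstruction: unchanged locations (high similarity) are rebuilt
    # from the other image; changed locations keep their own features.
    recon_a = gate * feat_b + (1.0 - gate) * feat_a

    # Difference representation: nonzero only where the two images disagree.
    return feat_a - recon_a

# Toy usage: a 7x7 grid flattened to 49 locations of 1024-d features.
diff = reconstruct_and_diff(torch.randn(49, 1024), torch.randn(49, 1024))
print(diff.shape)  # torch.Size([49, 1024])
```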