Koki Maeda
2023
Query-based Image Captioning from Multi-context 360° Images
Koki Maeda | Shuhei Kurita | Taiki Miyanishi | Naoaki Okazaki
Findings of the Association for Computational Linguistics: EMNLP 2023
A 360-degree image captures the entire scene without the limitations of a camera’s field of view, which makes it difficult to describe all the contexts in a single caption. We propose a novel task called Query-based Image Captioning (QuIC) for 360-degree images, where a query (words or short phrases) specifies the context to describe. This task is more challenging than conventional image captioning, which describes salient objects in images, as it requires fine-grained scene understanding to select the contents consistent with the user’s intent based on the query. We construct a dataset for the new task that comprises 3,940 360-degree images and 18,459 manually annotated pairs of queries and captions. Experiments demonstrate that further fine-tuning image captioning models on our dataset enables them to generate more diverse and controllable captions from the multiple contexts of 360-degree images.
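To make the task setting concrete, the snippet below sketches query-conditioned caption generation with an off-the-shelf vision-language model: the query is passed to the captioner together with the image so that decoding focuses on that context. The model choice (BLIP) and the prompt format are illustrative assumptions, not the setup used in the paper.

```python
# Hedged sketch of query-conditioned captioning in the spirit of QuIC.
# BLIP and the prompt-as-prefix scheme are assumptions for illustration.
import torch
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)


def caption_with_query(image_path: str, query: str) -> str:
    """Generate a caption focused on the context named by `query`."""
    image = Image.open(image_path).convert("RGB")
    # The query is given as a text prefix that the decoder continues.
    inputs = processor(image, text=query, return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(output_ids[0], skip_special_tokens=True)
```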
DueT: Image-Text Contrastive Transfer Learning with Dual-adapter Tuning
Taku Hasegawa | Kyosuke Nishida | Koki Maeda | Kuniko Saito
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
This paper presents DueT, a novel transfer learning method for vision and language models built by contrastive learning. In DueT, adapters are inserted into the image and text encoders, which have been initialized using models pre-trained on uni-modal corpora and then frozen. By training only these adapters, DueT enables efficient learning with a reduced number of trainable parameters. Moreover, unlike traditional adapters, those in DueT are equipped with a gating mechanism, enabling effective transfer and connection of knowledge acquired from pre-trained uni-modal encoders while preventing catastrophic forgetting. We report that DueT outperformed simple fine-tuning, the conventional method that fixes the image encoder and trains only the text encoder, and the LoRA-based adapter method in accuracy and parameter efficiency for zero-shot image and text retrieval in both English and Japanese domains.
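The snippet below is a minimal sketch of the gated-adapter idea described in the abstract: small trainable adapters sit inside frozen uni-modal encoders, and a learned gate blends the adapter output back into the frozen branch. Module names, dimensions, and the exact gating formula are assumptions for illustration, not the paper's specification.

```python
# Hedged sketch: gated bottleneck adapter added to a frozen encoder layer.
import torch
import torch.nn as nn


class GatedAdapter(nn.Module):
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()
        # Gate initialized at zero so training starts from the frozen
        # pre-trained behaviour, which helps avoid catastrophic forgetting.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        adapter_out = self.up(self.act(self.down(hidden_states)))
        return hidden_states + torch.tanh(self.gate) * adapter_out


def freeze_backbone(encoder: nn.Module) -> None:
    """Freeze every pre-trained parameter; only the adapters stay trainable."""
    for param in encoder.parameters():
        param.requires_grad = False
```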
2022
IMPARA: Impact-Based Metric for GEC Using Parallel Data
Koki Maeda | Masahiro Kaneko | Naoaki Okazaki
Proceedings of the 29th International Conference on Computational Linguistics
Automatic evaluation of grammatical error correction (GEC) is essential in developing useful GEC systems. Existing methods for automatic evaluation require multiple reference sentences or manual scores. However, such resources are expensive, thereby hindering automatic evaluation for various domains and correction styles. This paper proposes IMPARA, an Impact-based Metric for GEC using PARAllel data, which utilizes correction impacts computed from parallel data comprising pairs of grammatical/ungrammatical sentences. As parallel data is cheaper to obtain than manually assigned evaluation scores, IMPARA can reduce the cost of data creation for automatic evaluation. Correlations with human scores indicate that IMPARA is comparable to or better than existing evaluation methods. Furthermore, we find that IMPARA trained on various parallel data can perform evaluations tailored to different domains and correction styles.
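As a rough illustration of how parallel grammatical/ungrammatical pairs can supply a training signal for a learned GEC metric, the sketch below trains a scorer with a pairwise ranking objective so that corrections score higher than their ungrammatical sources. The encoder, loss, and scoring details are assumptions in the spirit of the abstract, not the paper's exact formulation.

```python
# Hedged sketch: pairwise ranking signal from grammatical/ungrammatical pairs.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")
scorer = nn.Linear(encoder.config.hidden_size, 1)


def score(sentence: str) -> torch.Tensor:
    """Scalar quality score from the [CLS] representation."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    cls = encoder(**inputs).last_hidden_state[:, 0]
    return scorer(cls).squeeze(-1)


def pairwise_loss(ungrammatical: str, corrected: str) -> torch.Tensor:
    # Encourage the corrected side of each parallel pair to score higher
    # than its ungrammatical source (margin ranking objective).
    return torch.relu(1.0 - (score(corrected) - score(ungrammatical))).mean()
```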
Co-authors
- Naoaki Okazaki 2
- Shuhei Kurita 1
- Taiki Miyanishi 1
- Taku Hasegawa 1
- Kyosuke Nishida 1
- Masahiro Kaneko 1
- Kuniko Saito 1