2024
Rethinking Loss Functions for Fact Verification
Yuta Mukobara | Yutaro Shigeto | Masashi Shimbo
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
We explore loss functions for fact verification in the FEVER shared task. While cross-entropy loss is the standard objective for training verdict predictors, it fails to capture the heterogeneity among the FEVER verdict classes. In this paper, we develop two task-specific objectives tailored to FEVER. Experimental results confirm that the proposed objective functions outperform standard cross-entropy. Performance improves further when these objectives are combined with simple class weighting, which effectively overcomes the imbalance in the training data. The source code is available at https://github.com/yuta-mukobara/RLF-KGAT
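As an illustration of the class weighting mentioned in the abstract, here is a minimal PyTorch sketch of class-weighted cross-entropy over the three FEVER verdict classes. This is not the paper's implementation (see the linked repository for that); the weight values, label order, and batch contents below are hypothetical.

```python
import torch
import torch.nn as nn

# The three FEVER verdict classes: SUPPORTS, REFUTES, NOT ENOUGH INFO.
NUM_CLASSES = 3

# Hypothetical per-class weights (e.g., inverse class frequencies);
# in practice they would be derived from the training-set label counts.
class_weights = torch.tensor([0.7, 1.1, 1.4])

# PyTorch's cross-entropy supports per-class weighting directly.
loss_fn = nn.CrossEntropyLoss(weight=class_weights)

# Dummy batch: logits from a verdict predictor and gold labels.
logits = torch.randn(4, NUM_CLASSES, requires_grad=True)
labels = torch.tensor([0, 2, 1, 2])

loss = loss_fn(logits, labels)
loss.backward()
print(loss.item())
```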
2020
Video Caption Dataset for Describing Human Actions in Japanese
Yutaro Shigeto | Yuya Yoshikawa | Jiaqing Lin | Akikazu Takeuchi
Proceedings of the Twelfth Language Resources and Evaluation Conference
In recent years, automatic video caption generation has attracted considerable attention. This paper focuses on generating Japanese captions that describe human actions. While most currently available video caption datasets have been constructed for English, no equivalent Japanese dataset exists. To address this, we constructed a large-scale Japanese video caption dataset consisting of 79,822 videos and 399,233 captions. Each caption describes a video in the form of “who does what and where.” Describing human actions requires identifying the person, the place, and the action; indeed, when we describe human actions, we usually mention the scene, the person, and the action. In our experiments, we evaluated two caption generation methods to obtain benchmark results, and we investigated whether these methods could specify “who does what and where.”
2017
STAIR Captions: Constructing a Large-Scale Japanese Image Caption Dataset
Yuya Yoshikawa | Yutaro Shigeto | Akikazu Takeuchi
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
In recent years, automatic generation of image descriptions (captions), that is, image captioning, has attracted a great deal of attention. In this paper, we consider generating Japanese captions for images. Since most available caption datasets have been constructed for the English language, there are few datasets for Japanese. To tackle this problem, we construct a large-scale Japanese image caption dataset, called STAIR Captions, based on images from MS-COCO. STAIR Captions consists of 820,310 Japanese captions for 164,062 images. In our experiments, we show that a neural network trained on STAIR Captions generates more natural and higher-quality Japanese captions than a pipeline that first generates English captions and then translates them into Japanese with English-Japanese machine translation.
2013
Construction of English MWE Dictionary and its Application to POS Tagging
Yutaro Shigeto | Ai Azuma | Sorami Hisamoto | Shuhei Kondo | Tomoya Kose | Keisuke Sakaguchi | Akifumi Yoshimoto | Frances Yung | Yuji Matsumoto
Proceedings of the 9th Workshop on Multiword Expressions