Gyeonghun Kim
2023
Local Temperature Beam Search: Avoid Neural Text DeGeneration via Enhanced Calibration
Dongkyu Lee | Gyeonghun Kim | Janghoon Han | Taesuk Hong | Yi-Reun Kim | Stanley Jungkyu Choi | Nevin L. Zhang
Findings of the Association for Computational Linguistics: ACL 2023
Previous studies have consistently observed that a language model repeats itself, creating repetitions in an output sequence. To cope with the issue, stochastic decoding schemes have been the de facto approach; these strategies add randomness at inference time and hence avoid the “self-loop”. However, the remedy comes at the cost of output quality due to the randomness involved. In this work, we introduce a deterministic decoding scheme, local temperature beam search. This inference algorithm is an embarrassingly simple variant of beam search, yet it reduces repetition to a level below that of sampling-based decoding algorithms while maintaining the coherence of beam search. Our idea is rooted in the concept of model calibration; we view repetition as a casualty of overconfidence in a model. Our work therefore mitigates the miscalibration present in the course of inference with a post-calibration approach applied in a beam-specific manner. Our inference scheme is validated on text completion tasks, in which the repetition problem is seen most clearly, and is exhaustively compared with existing inference schemes.
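The following is a minimal, illustrative sketch of the general idea described in the abstract, not the authors' exact formulation: an ordinary beam-search expansion step in which each beam's next-token distribution is rescaled with its own temperature before renormalization. The `repetition_aware_temperature` heuristic is a hypothetical example of a beam-specific signal that flattens an overconfident distribution when the beam already repeats itself.

```python
import math

def local_temperature_beam_step(beams, next_token_logprobs, beam_width, temp_fn):
    """One step of a beam-search variant with beam-specific temperature scaling.

    beams: list of (token_ids, cumulative_logprob)
    next_token_logprobs: list of dicts {token_id: logprob}, one per beam
    temp_fn: maps a beam's token_ids to a temperature > 0 (hypothetical heuristic)
    """
    candidates = []
    for (tokens, score), logprobs in zip(beams, next_token_logprobs):
        t = temp_fn(tokens)  # beam-specific temperature for this hypothesis
        # Temperature-scale the beam's distribution, then renormalize in log space.
        scaled = {tok: lp / t for tok, lp in logprobs.items()}
        norm = math.log(sum(math.exp(lp) for lp in scaled.values()))
        for tok, lp in scaled.items():
            candidates.append((tokens + [tok], score + lp - norm))
    # Keep the top-scoring hypotheses, exactly as in ordinary beam search.
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_width]

def repetition_aware_temperature(tokens, base=1.0, bump=0.5, window=8):
    """Hypothetical heuristic: raise the temperature (flatten the distribution)
    when the most recent token already appears in the recent context."""
    if tokens and tokens[-1] in tokens[-window - 1:-1]:
        return base + bump
    return base
```

Because no sampling is involved, the procedure stays deterministic; only the sharpness of each beam's distribution changes before the usual top-k pruning.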
2022
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
Joel Jang | Seonghyeon Ye | Changho Lee | Sohee Yang | Joongbo Shin | Janghoon Han | Gyeonghun Kim | Minjoon Seo
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Language Models (LMs) become outdated as the world changes; they often fail at tasks requiring recent factual information that was absent or different during training, a phenomenon called temporal misalignment. This is an especially challenging problem because the research community still lacks a coherent dataset for assessing the adaptability of LMs to frequently updated knowledge corpora such as Wikipedia. To this end, we introduce TemporalWiki, a lifelong benchmark for ever-evolving LMs that utilizes the difference between consecutive snapshots of English Wikipedia and English Wikidata for training and evaluation, respectively. The benchmark thus allows researchers to periodically track an LM’s ability to retain previous knowledge and acquire updated or new knowledge at each point in time. We also find that training an LM on the diff data through continual learning methods achieves similar or better perplexity than training on the entire snapshot, at 12 times lower computational cost, which verifies that factual knowledge in LMs can be safely updated with minimal training data via continual learning.
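A minimal sketch of the snapshot-diff idea the abstract refers to, under the simplifying assumption that each snapshot is a mapping from article title to article text (the benchmark's actual data pipeline is more involved): only articles that are new or whose text changed between consecutive snapshots are kept as continual-training data.

```python
def snapshot_diff(old_snapshot, new_snapshot):
    """Collect articles that are new or changed between two consecutive snapshots.

    Both arguments are assumed to be dicts mapping article title -> article text;
    this is a simplified illustration of building 'diff' training data.
    """
    diff = {}
    for title, text in new_snapshot.items():
        if old_snapshot.get(title) != text:
            diff[title] = text
    return diff

# Only the updated and newly added articles are retained for continual training,
# which is what keeps the update cheap relative to retraining on the full snapshot.
old = {"A": "alpha v1", "B": "beta"}
new = {"A": "alpha v2", "B": "beta", "C": "gamma"}
assert snapshot_diff(old, new) == {"A": "alpha v2", "C": "gamma"}
```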