Jinkun Chen
2024
Long-form evaluation of model editing
Domenic Rosati | Robie Gonzales | Jinkun Chen | Xuemin Yu | Yahya Kayani | Frank Rudzicz | Hassan Sajjad
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Evaluations of model editing, a technique for changing the factual knowledge held by Large Language Models (LLMs), currently only use the ‘next few token’ completions after a prompt. As a result, the impact of these methods on longer natural language generation is largely unknown. We introduce long-form evaluation of model editing (LEME), a novel evaluation protocol that measures the efficacy and impact of model editing in long-form generative settings. Our protocol consists of a machine-rated survey and a classifier that correlates well with human ratings. Importantly, we find that our protocol has very little relationship with previous short-form metrics (despite being designed to extend efficacy, generalization, locality, and portability into a long-form setting), indicating that our method introduces a novel set of dimensions for understanding model editing methods. Using this protocol, we benchmark a number of model editing techniques and present several findings, including that, while some methods (ROME and MEMIT) perform well in making consistent edits within a limited scope, they suffer much more from factual drift than other methods. Finally, we present a qualitative analysis that illustrates common failure modes in long-form generative settings, including internal consistency, lexical cohesion, and locality issues.
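To make the protocol concrete, here is a minimal Python sketch of a long-form evaluation loop in the spirit of the abstract: generate a long continuation from an edited model, then score it along the four dimensions named above. This is an illustration under stated assumptions, not the LEME implementation; the `EditCase` structure, the `generate` and `rate` callables, and the toy stand-ins at the bottom are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# The four dimensions the abstract extends into the long-form setting.
DIMENSIONS = ["efficacy", "generalization", "locality", "portability"]

@dataclass
class EditCase:
    prompt: str    # prompt probing the edited fact in a long-form setting
    new_fact: str  # the edited fact, e.g. "The Eiffel Tower is in Rome"

def evaluate_long_form(
    cases: List[EditCase],
    generate: Callable[[str], str],          # edited model, long-form decoding
    rate: Callable[[str, str, str], float],  # (dimension, generation, fact) -> score in [0, 1]
) -> Dict[str, float]:
    """Average per-dimension scores over long-form generations, one per edit case."""
    totals = {d: 0.0 for d in DIMENSIONS}
    for case in cases:
        # A long continuation, not just a 'next few token' completion.
        generation = generate(case.prompt)
        for d in DIMENSIONS:
            totals[d] += rate(d, generation, case.new_fact)
    return {d: total / len(cases) for d, total in totals.items()}

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real harness would call
    # an edited LLM here and a machine rater or classifier for scoring.
    demo = [EditCase("Where is the Eiffel Tower?", "The Eiffel Tower is in Rome")]
    fake_generate = lambda prompt: "The Eiffel Tower is in Rome. Rome is the capital of Italy."
    fake_rate = lambda dim, gen, fact: 1.0 if fact.split()[-1] in gen else 0.0
    print(evaluate_long_form(demo, fake_generate, fake_rate))
```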
2018
The Sogou-TIIC Speech Translation System for IWSLT 2018
Yuguang Wang | Liangliang Shi | Linyu Wei | Weifeng Zhu | Jinkun Chen | Zhichao Wang | Shixue Wen | Wei Chen | Yanfeng Wang | Jia Jia
Proceedings of the 15th International Conference on Spoken Language Translation
This paper describes our speech translation system for the IWSLT 2018 task of translating lectures and TED talks from English to German. We employ a pipeline approach, which mainly consists of an Automatic Speech Recognition (ASR) system, a post-processing module, and a Neural Machine Translation (NMT) system. Our ASR system is an ensemble of Deep-CNN, BLSTM, and TDNN models, with an N-gram language model used for lattice rescoring. We report average results on tst2013, tst2014, and tst2015; our best combination system achieves an average WER of 6.73. The machine translation system is based on Google’s Transformer architecture. We achieved an improvement of 3.6 BLEU over the baseline system by applying several techniques, such as cleaning the parallel corpus, fine-tuning single models, ensembling models, and re-scoring with additional features. Our final average result on speech translation is 31.02 BLEU.
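The cascaded pipeline the abstract describes can be summarized as three composed stages. The sketch below is a minimal illustration of that structure, not the authors’ system; `asr`, `postprocess`, and `nmt` are hypothetical stubs standing in for the real components.

```python
from typing import Callable

def cascade(
    asr: Callable[[bytes], str],        # audio -> raw English transcript
    postprocess: Callable[[str], str],  # restore punctuation/casing on ASR output
    nmt: Callable[[str], str],          # English text -> German text
) -> Callable[[bytes], str]:
    """Compose the three stages into a single audio -> translation function."""
    def translate(audio: bytes) -> str:
        return nmt(postprocess(asr(audio)))
    return translate

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs; the real stages would wrap the
    # ASR ensemble, the post-processing module, and the Transformer NMT model.
    fake_asr = lambda audio: "hello world"
    fake_post = lambda text: text.capitalize() + "."
    fake_nmt = lambda text: "Hallo Welt." if text == "Hello world." else text
    translate = cascade(fake_asr, fake_post, fake_nmt)
    print(translate(b"\x00"))  # -> Hallo Welt.
```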