Weizhi Wang
2022
Non-Parametric Domain Adaptation for End-to-End Speech Translation
Yichao Du | Weizhi Wang | Zhirui Zhang | Boxing Chen | Tong Xu | Jun Xie | Enhong Chen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
End-to-end speech translation (E2E-ST) has received increasing attention due to its potential for less error propagation, lower latency, and fewer parameters. However, the effectiveness of neural approaches to this task is severely limited by the available training corpus, especially for domain adaptation, where in-domain triplet data is scarce or nonexistent. In this paper, we propose a novel non-parametric method that leverages an in-domain text translation corpus to achieve domain adaptation for E2E-ST systems. To this end, we first incorporate an additional encoder into the pre-trained E2E-ST model to realize text translation modeling, based on which the decoder's output representations for the text and speech translation tasks are unified by reducing the corresponding representation mismatch on the available triplet training data. During domain adaptation, a k-nearest-neighbor (kNN) classifier is introduced to produce the final translation distribution using an external datastore built from the domain-specific text translation corpus, while the unified output representation is used to perform the similarity search. Experiments on the Europarl-ST benchmark demonstrate that when only in-domain text translation data is available, our proposed approach significantly improves the baseline by 12.82 BLEU on average across all translation directions, even outperforming the strong in-domain fine-tuning strategy.
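A minimal sketch of the retrieval step in the spirit of kNN-MT, which this abstract builds on: a datastore pairs decoder output representations with their gold target tokens, and at decoding time the k nearest entries are turned into a token distribution that is interpolated with the model's own prediction. All names and parameters here (build_datastore, knn_distribution, lambda_) are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of kNN-based translation retrieval (not the paper's code).
import numpy as np


def build_datastore(decoder_states, target_tokens):
    """Pair each decoder output representation with its gold target token.

    decoder_states: (N, d) array from forced decoding of the in-domain
                    text translation corpus.
    target_tokens:  (N,) array of the corresponding target token ids.
    """
    return decoder_states.astype(np.float32), target_tokens


def knn_distribution(query, keys, values, vocab_size, k=8, temperature=10.0):
    """Turn the k nearest datastore entries into a distribution over the vocab."""
    dists = np.sum((keys - query) ** 2, axis=1)   # squared L2 distances
    k = min(k, len(keys))
    nn = np.argpartition(dists, k - 1)[:k]        # indices of k nearest keys
    weights = np.exp(-dists[nn] / temperature)    # closer neighbours weigh more
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for idx, w in zip(nn, weights):
        p_knn[values[idx]] += w                   # aggregate weight per token
    return p_knn


def interpolate(p_model, p_knn, lambda_=0.5):
    """Final translation distribution: mix model and kNN predictions."""
    return (1.0 - lambda_) * p_model + lambda_ * p_knn
```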
2021
Rethinking Zero-shot Neural Machine Translation: From a Perspective of Latent Variables
Weizhi Wang | Zhirui Zhang | Yichao Du | Boxing Chen | Jun Xie | Weihua Luo
Findings of the Association for Computational Linguistics: EMNLP 2021
Zero-shot translation, directly translating between language pairs unseen in training, is a promising capability of multilingual neural machine translation (NMT). However, it usually suffers from spurious correlations between the output language and language-invariant semantics, induced by the maximum-likelihood training objective, which leads to poor transfer performance on zero-shot directions. In this paper, we introduce a denoising autoencoder objective based on the pivot language into the traditional training objective to improve translation accuracy on zero-shot directions. A theoretical analysis from the perspective of latent variables shows that our approach implicitly maximizes the probability distributions for zero-shot directions. On two benchmark machine translation datasets, we demonstrate that the proposed method effectively eliminates the spurious correlations and significantly outperforms state-of-the-art methods. Our code is available at https://github.com/Victorwz/zs-nmt-dae.
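A rough PyTorch-style sketch of the combined objective the abstract describes: the usual translation loss through the pivot language plus a denoising autoencoder loss that reconstructs the clean pivot sentence from a corrupted version. The model interface, batch fields, and corruption scheme are assumptions for illustration; the authors' actual implementation is in the linked repository.

```python
# Hypothetical sketch of NMT loss + pivot-language DAE loss (not the paper's code).
import random

import torch
import torch.nn.functional as F


def noise(tokens, drop_prob=0.1, shuffle_window=3):
    """Simple word drop plus local shuffle, a common DAE corruption scheme."""
    kept = [t for t in tokens if random.random() > drop_prob] or tokens[:1]
    keys = [i + random.uniform(0, shuffle_window) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]


def training_loss(model, batch, dae_weight=1.0):
    """Translation loss (src -> pivot) plus DAE loss (noisy pivot -> pivot)."""
    # Standard supervised loss through the pivot language (e.g. English).
    mt_logits = model(batch["src"], batch["pivot_in"])
    mt_loss = F.cross_entropy(
        mt_logits.view(-1, mt_logits.size(-1)), batch["pivot_out"].view(-1)
    )
    # Reconstruct the clean pivot sentence from its corrupted version, which
    # discourages the decoder from keying on the input language identity.
    dae_logits = model(batch["noisy_pivot"], batch["pivot_in"])
    dae_loss = F.cross_entropy(
        dae_logits.view(-1, dae_logits.size(-1)), batch["pivot_out"].view(-1)
    )
    return mt_loss + dae_weight * dae_loss
```

Here batch["noisy_pivot"] is assumed to hold pivot token ids corrupted with noise() before batching.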
2014
Clustering tweets using Wikipedia concepts
Guoyu Tang | Yunqing Xia | Weizhi Wang | Raymond Lau | Fang Zheng
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Two challenging issues stand out in tweet clustering. First, data sparsity is severe, since no tweet can be longer than 140 characters. Second, synonymy and polysemy are common, because users express the same meaning in many different ways. Inspired by recent research indicating that Wikipedia is promising for representing text, we exploit Wikipedia concepts to represent tweets as concept vectors. We address polysemy with a Bayesian model and synonymy by exploiting Wikipedia redirections. To further alleviate the data sparsity problem, we also make use of three types of out-links in Wikipedia. Evaluation on a Twitter dataset shows that the concept model outperforms the traditional vector space model (VSM) in tweet clustering.
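A toy sketch of the synonymy step: Wikipedia redirections collapse different surface forms onto one canonical concept, so tweets worded differently can still share dimensions in their concept vectors. The redirect table and tweets below are made-up illustrations; the Bayesian disambiguation and out-link expansion steps are omitted.

```python
# Hypothetical sketch of Wikipedia-concept tweet vectors (not the paper's code).
import math
from collections import Counter

# Redirect table: surface form -> canonical Wikipedia concept (synonymy).
REDIRECTS = {"nyc": "New_York_City", "ny": "New_York_City",
             "soccer": "Association_football"}


def concept_vector(tweet):
    """Map each token onto a Wikipedia concept where possible."""
    concepts = Counter()
    for token in tweet.lower().split():
        concepts[REDIRECTS.get(token, token)] += 1
    return concepts


def cosine(u, v):
    """Cosine similarity between two sparse concept vectors."""
    dot = sum(u[c] * v[c] for c in u if c in v)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0


# Two tweets about the same concept now overlap despite different wording.
t1, t2 = concept_vector("visiting nyc today"), concept_vector("love ny pizza")
print(cosine(t1, t2))  # > 0 thanks to the shared New_York_City concept
```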