Xinhui Hu


2023

Hybrid-Regressive Paradigm for Accurate and Speed-Robust Neural Machine Translation
Qiang Wang | Xinhui Hu | Ming Chen
Findings of the Association for Computational Linguistics: ACL 2023

This work empirically confirms that non-autoregressive translation (NAT) is less robust to decoding batch size and hardware settings than autoregressive translation (AT). To address this issue, we demonstrate through synthetic experiments that prompting with a small number of AT predictions can significantly reduce the performance gap between AT and NAT. Following this line, we propose hybrid-regressive translation (HRT), a two-stage translation prototype that combines the strengths of AT and NAT. Specifically, HRT first generates a discontinuous sequence via autoregression (e.g., making a prediction for every k tokens, k > 1) and then fills in all previously skipped tokens at once in a non-autoregressive manner. Experiments on five translation tasks show that HRT achieves translation quality comparable to AT while delivering at least 1.5x faster inference regardless of batch size and device. Additionally, HRT successfully inherits the sound characteristics of AT in the deep-encoder-shallow-decoder architecture, allowing for further speedup without BLEU loss.
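A minimal sketch of the two-stage decoding idea described in the abstract: an autoregressive pass predicts every k-th token, and a single non-autoregressive pass then fills in the skipped positions. The model interfaces (`at_step`, `nat_fill`) and constants here are hypothetical placeholders, not the authors' actual API.

```python
def hybrid_regressive_decode(encoder_out, at_step, nat_fill,
                             k=2, max_len=64, bos=0, eos=1, mask=-1):
    """Sketch of hybrid-regressive decoding.

    Stage 1: autoregressively predict a discontinuous skeleton
             (one token per k target positions).
    Stage 2: fill all skipped positions in one non-autoregressive pass.
    """
    # Stage 1: discontinuous autoregressive skeleton.
    skeleton = [bos]
    while len(skeleton) * k < max_len:
        next_tok = at_step(encoder_out, skeleton)  # predicts the token k positions ahead
        skeleton.append(next_tok)
        if next_tok == eos:
            break

    # Stage 2: lay the skeleton out on a canvas with masked gaps and let the
    # NAT decoder predict all masked positions at once.
    canvas = []
    for tok in skeleton:
        canvas.append(tok)
        canvas.extend([mask] * (k - 1))
    filled = nat_fill(encoder_out, canvas)  # single parallel pass

    return [t for t in filled if t not in (bos, eos)]
```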

2014

The NICT ASR system for IWSLT 2014
Peng Shen | Yugang Lu | Xinhui Hu | Naoyuki Kanda | Masahiro Saiko | Chiori Hori
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper describes our automatic speech recognition system for the IWSLT 2014 evaluation campaign. The system is based on weighted finite-state transducers and a combination of multiple subsystems built from four types of acoustic feature sets, four types of acoustic models, and N-gram and recurrent neural network language models. Compared with the system we used last year, we added subsystems based on deep neural network modeling of filter bank features and convolutional deep neural network modeling of filter bank features with tonal features. In addition, we applied modifications and improvements to automatic acoustic segmentation and deep neural network speaker adaptation. In speech recognition experiments, our new system achieved a 21.5% relative improvement in word error rate over last year's system on the 2013 English test set.
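A minimal sketch of one common way to combine the N-gram and recurrent neural network language models mentioned above: linearly interpolating their probabilities when rescoring an n-best list. The function names, weights, and interfaces here are illustrative assumptions, not the paper's actual configuration.

```python
import math

def interpolate_log_probs(log_p_ngram, log_p_rnn, lam=0.5):
    """Stable log of lam * P_ngram + (1 - lam) * P_rnn."""
    m = max(log_p_ngram, log_p_rnn)
    return m + math.log(lam * math.exp(log_p_ngram - m)
                        + (1.0 - lam) * math.exp(log_p_rnn - m))

def rescore_nbest(nbest, ngram_lm_score, rnn_lm_score, lm_weight=0.5, lam=0.5):
    """Pick the best hypothesis from an n-best list.

    `nbest` is a list of (tokens, acoustic_log_score) pairs; the two LM
    scorers are assumed to map a token sequence to a log-probability.
    """
    best_hyp, best_score = None, -math.inf
    for tokens, am_score in nbest:
        lm_score = interpolate_log_probs(ngram_lm_score(tokens),
                                         rnn_lm_score(tokens), lam)
        total = am_score + lm_weight * lm_score  # log-linear combination
        if total > best_score:
            best_hyp, best_score = tokens, total
    return best_hyp
```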

2009

Construction of Chinese Segmented and POS-tagged Conversational Corpora and Their Evaluations on Spontaneous Speech Recognitions
Xinhui Hu | Ryosuke Isotani | Satoshi Nakamura
Proceedings of the 7th Workshop on Asian Language Resources (ALR7)

2007

Learning Unsupervised SVM Classifier for Answer Selection in Web Question Answering
Youzheng Wu | Ruiqiang Zhang | Xinhui Hu | Hideki Kashioka
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)