2024
i-Code V2: An Autoregressive Generation Framework over Vision, Language, and Speech Data
Ziyi Yang | Mahmoud Khademi | Yichong Xu | Reid Pryzant | Yuwei Fang | Chenguang Zhu | Dongdong Chen | Yao Qian | Xuemei Gao | Yi-Ling Chen | Robert Gmyr | Naoyuki Kanda | Noel Codella | Bin Xiao | Yu Shi | Lu Yuan | Takuya Yoshioka | Michael Zeng | Xuedong Huang
Findings of the Association for Computational Linguistics: NAACL 2024
The convergence of text, visual, and audio data is crucial for human-like artificial intelligence; however, the current Vision-Language-Speech landscape is dominated by encoder-only models that lack generative abilities. We propose closing this gap with i-Code V2, one of the first models capable of generating natural language from any combination of Vision, Language, and Speech data. i-Code V2 leverages state-of-the-art single-modality encoders, combining their outputs with a new modality-fusing encoder to project combinations of modalities into a shared representational space. Language tokens are generated from these representations via an autoregressive decoder. i-Code V2 is pretrained end-to-end on a large collection of dual- and single-modality datasets with a novel text completion objective that can be generalized across arbitrary combinations of modalities. i-Code V2 matches or outperforms state-of-the-art single- and dual-modality baselines on 7 multimodal tasks, demonstrating the power of generative multimodal pretraining across a diversity of tasks and signals.
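A minimal sketch may make the data flow described in this abstract concrete: per-modality features are projected into a shared space, merged by a modality-fusing encoder, and decoded autoregressively into language tokens. This is not the authors' code; the module names, sizes, and use of plain PyTorch Transformer layers are illustrative assumptions.

import torch
import torch.nn as nn

class FusionToTextModel(nn.Module):
    def __init__(self, d_model=256, vocab_size=1000):
        super().__init__()
        # Stand-ins for pretrained single-modality encoders (projections only).
        self.vision_proj = nn.Linear(512, d_model)
        self.speech_proj = nn.Linear(80, d_model)
        self.text_emb = nn.Embedding(vocab_size, d_model)
        # Modality-fusing encoder over the concatenated feature sequence.
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Autoregressive decoder that generates language tokens.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, vision_feats, speech_feats, text_ids, target_ids):
        # Project each available modality into the shared space and concatenate.
        fused_in = torch.cat(
            [self.vision_proj(vision_feats),
             self.speech_proj(speech_feats),
             self.text_emb(text_ids)], dim=1)
        memory = self.fusion(fused_in)
        # Teacher-forced autoregressive decoding with a causal mask
        # (a text-completion-style objective over the fused context).
        tgt = self.text_emb(target_ids)
        causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.lm_head(out)

model = FusionToTextModel()
logits = model(torch.randn(2, 10, 512),            # e.g. vision-encoder patch features
               torch.randn(2, 50, 80),              # e.g. speech-encoder frame features
               torch.randint(0, 1000, (2, 8)),      # input text tokens
               torch.randint(0, 1000, (2, 12)))     # target text tokens
print(logits.shape)  # (2, 12, 1000): next-token logits for the generated text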
2014
The NCT ASR system for IWSLT 2014
Peng Shen | Yugang Lu | Xinhui Hu | Naoyuki Kanda | Masahiro Saiko | Chiori Hori
Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign
This paper describes our automatic speech recognition system for the IWSLT 2014 evaluation campaign. The system is based on weighted finite-state transducers and a combination of multiple subsystems built from four types of acoustic feature sets, four types of acoustic models, and N-gram and recurrent neural network language models. Compared with our system from last year, we added subsystems based on deep neural network modeling of filter bank features and convolutional deep neural network modeling of filter bank features with tonal features. In addition, we applied modifications and improvements to automatic acoustic segmentation and deep neural network speaker adaptation. In speech recognition experiments, our new system achieved a 21.5% relative improvement in word error rate over last year's system on the 2013 English test data set.
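One ingredient mentioned in this abstract, combining N-gram and recurrent neural network language models, is commonly realized as N-best rescoring with interpolated language model scores. The sketch below is a hedged illustration of that general idea, not the actual NCT system; the function name, weights, and toy scores are assumptions.

def rescore_nbest(nbest, lm_weight=0.5, rnn_interp=0.5):
    """nbest: list of dicts holding acoustic, N-gram LM, and RNN LM log-scores."""
    rescored = []
    for hyp in nbest:
        # Combine the two LM log-probabilities (log-linear interpolation),
        # then add the acoustic log-likelihood with a single LM scale.
        lm_score = (1 - rnn_interp) * hyp["ngram_logprob"] + rnn_interp * hyp["rnn_logprob"]
        total = hyp["acoustic_loglik"] + lm_weight * lm_score
        rescored.append((total, hyp["text"]))
    return max(rescored)  # hypothesis with the highest combined score

best = rescore_nbest([
    {"text": "the cat sat", "acoustic_loglik": -120.0, "ngram_logprob": -14.0, "rnn_logprob": -11.0},
    {"text": "the cats at", "acoustic_loglik": -119.0, "ngram_logprob": -18.5, "rnn_logprob": -16.0},
])
print(best)  # (-126.25, 'the cat sat')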
2006
Multi-Domain Spoken Dialogue System with Extensibility and Robustness against Speech Recognition Errors
Kazunori Komatani | Naoyuki Kanda | Mikio Nakano | Kazuhiro Nakadai | Hiroshi Tsujino | Tetsuya Ogata | Hiroshi G. Okuno
Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue