Maozong Zheng
2023
AntContentTech at SemEval-2023 Task 6: Domain-adaptive Pretraining and Auxiliary-task Learning for Understanding Indian Legal Texts
Jingjing Huo | Kezun Zhang | Zhengyong Liu | Xuan Lin | Wenqiang Xu | Maozong Zheng | Zhaoguo Wang | Song Li
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)
The objective of this shared task is understanding legal texts, and it is beset with difficulties such as the comprehension of lengthy, noisy legal documents, domain specificity, and the scarcity of annotated data. To address these challenges, we propose a system that employs a hierarchical model and integrates domain-adaptive pretraining, data augmentation, and auxiliary-task learning techniques. Moreover, to enhance generalization and robustness, we ensemble the models trained with these diverse techniques. Our system ranked first on the RR sub-task and placed mid-field on the other two sub-tasks.
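As a rough illustration of the auxiliary-task setup mentioned in the abstract, the sketch below (PyTorch; the module names, dimensions, auxiliary label set, and loss weight are assumptions for illustration, not details taken from the paper) pairs a document-level encoder over sentence embeddings with a main rhetorical-role head and an auxiliary head, trained under a weighted joint loss.

```python
# Minimal sketch of hierarchical multi-task learning (all names/shapes hypothetical).
import torch
import torch.nn as nn

class HierarchicalMultiTaskModel(nn.Module):
    def __init__(self, sent_dim=768, hidden=256, num_roles=13, num_aux=2):
        super().__init__()
        # Document-level encoder over precomputed sentence embeddings
        # (e.g. produced by a domain-adapted pretrained sentence encoder).
        self.doc_encoder = nn.LSTM(sent_dim, hidden, batch_first=True, bidirectional=True)
        self.role_head = nn.Linear(2 * hidden, num_roles)  # main task: rhetorical roles
        self.aux_head = nn.Linear(2 * hidden, num_aux)     # auxiliary task head

    def forward(self, sent_embs):
        ctx, _ = self.doc_encoder(sent_embs)               # (batch, n_sents, 2*hidden)
        return self.role_head(ctx), self.aux_head(ctx)

def joint_loss(role_logits, aux_logits, role_labels, aux_labels, aux_weight=0.3):
    # Main loss plus a down-weighted auxiliary loss.
    ce = nn.CrossEntropyLoss()
    main = ce(role_logits.flatten(0, 1), role_labels.flatten())
    aux = ce(aux_logits.flatten(0, 1), aux_labels.flatten())
    return main + aux_weight * aux

# Toy usage: 2 documents of 40 sentences each.
model = HierarchicalMultiTaskModel()
role_logits, aux_logits = model(torch.randn(2, 40, 768))
loss = joint_loss(role_logits, aux_logits,
                  torch.randint(0, 13, (2, 40)), torch.randint(0, 2, (2, 40)))
loss.backward()
```

In such a setup, ensembling would amount to averaging the role-head probabilities of several models trained with different pretraining and augmentation recipes.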
ViT-TTS: Visual Text-to-Speech with Scalable Diffusion Transformer
Huadai Liu | Rongjie Huang | Xuan Lin | Wenqiang Xu | Maozong Zheng | Hong Chen | Jinzheng He | Zhou Zhao
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Text-to-speech (TTS) has undergone remarkable improvements in performance, particularly with the advent of Denoising Diffusion Probabilistic Models (DDPMs). However, the perceived quality of audio depends not solely on its content, pitch, rhythm, and energy, but also on the physical environment. In this work, we propose ViT-TTS, the first visual TTS model with scalable diffusion transformers. ViT-TTS complements the phoneme sequence with visual information to generate audio of high perceived quality, opening up new avenues for practical applications in AR and VR that allow a more immersive and realistic audio experience. To mitigate the data scarcity in learning visual acoustic information, we 1) introduce a self-supervised learning framework to enhance both the visual-text encoder and the denoiser decoder; 2) leverage a diffusion transformer that scales in parameters and capacity to learn visual scene information. Experimental results demonstrate that ViT-TTS achieves new state-of-the-art results, outperforming cascaded systems and other baselines regardless of the visibility of the scene. With low-resource data (1h, 2h, 5h), ViT-TTS achieves results comparable to rich-resource baselines.
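To make the visual-conditioning idea concrete, here is a minimal sketch (PyTorch; the module names, dimensions, and fusion scheme are assumptions rather than the paper's implementation) in which phoneme embeddings attend to projected visual scene features, and the fused sequence conditions a transformer denoiser block of the kind a diffusion decoder would stack.

```python
# Minimal sketch of visual-text conditioning for a diffusion denoiser (hypothetical).
import torch
import torch.nn as nn

class VisualTextConditioner(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.phoneme_emb = nn.Embedding(100, d_model)      # toy phoneme vocabulary
        self.visual_proj = nn.Linear(512, d_model)         # project visual patch features
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, phonemes, visual_feats):
        q = self.phoneme_emb(phonemes)                     # (B, T_ph, d)
        kv = self.visual_proj(visual_feats)                # (B, T_img, d)
        fused, _ = self.cross_attn(q, kv, kv)              # phonemes attend to the scene
        return q + fused                                   # conditioning sequence

class DenoiserBlock(nn.Module):
    """One transformer block of a mel-spectrogram denoiser, conditioned on the fused sequence."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cond_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))

    def forward(self, noisy_mel, cond):
        x, _ = self.self_attn(noisy_mel, noisy_mel, noisy_mel)
        x = noisy_mel + x
        c, _ = self.cond_attn(x, cond, cond)               # attend to visual-text conditioning
        x = x + c
        return x + self.ff(x)

# Toy usage: 50 phonemes, 196 visual patches, 200 noisy mel frames.
cond = VisualTextConditioner()(torch.randint(0, 100, (2, 50)), torch.randn(2, 196, 512))
out = DenoiserBlock()(torch.randn(2, 200, 256), cond)
```

Stacking several such blocks, and widening d_model and the number of heads, is one way a diffusion transformer of this kind can be scaled in parameters and capacity.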