Jingyu Liu
2024
Effective Long-Context Scaling of Foundation Models
Wenhan Xiong | Jingyu Liu | Igor Molybog | Hejia Zhang | Prajjwal Bhargava | Rui Hou | Louis Martin | Rashi Rungta | Karthik Abinav Sankararaman | Barlas Oguz | Madian Khabsa | Han Fang | Yashar Mehdad | Sharan Narang | Kshitiz Malik | Angela Fan | Shruti Bhosale | Sergey Edunov | Mike Lewis | Sinong Wang | Hao Ma
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
We present an effective recipe to train strong long-context LLMs that are capable of utilizing massive context windows of up to 32,000 tokens. Our models are built through continual pretraining from Llama 2 checkpoints with longer text sequences and on a dataset where long texts are upsampled. We perform extensive evaluation using language modeling, synthetic context probing tasks, and a wide range of downstream benchmarks. Across all evaluations, our models achieve consistent improvements on most regular-context tasks and significant improvements on long-context tasks over Llama 2. Moreover, with a cost-effective instruction tuning procedure that is free of expensive annotation, the presented models can already surpass gpt-3.5-turbo-16k's overall performance on long-context benchmarks. Alongside these results, we provide an in-depth analysis on each individual component of our method. We delve into Llama's position encodings and discuss its key limitation in modeling long data. We examine the impact of various design choices in the pretraining process, including the data mix and the training curriculum of sequence lengths – ablation results suggest that having abundant long texts in the pretrain dataset is not the key to achieving strong performance, and we empirically verify that long context continual pretraining is more efficient and similarly effective compared to pretraining from scratch with long sequences.
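The abstract points to Llama's rotary position encoding (RoPE) as a key limitation for long inputs. One widely used way to adapt RoPE to longer sequences, which may or may not match the exact modification studied in the paper, is to enlarge the rotary base frequency so that relative angles between distant tokens stay in a range the model has already seen. The sketch below is a minimal illustration under that assumption; the function names, shapes, and the larger base value in the example are illustrative choices, not taken from the paper's code.

```python
# Minimal sketch (not the authors' implementation): rotary position
# embeddings (RoPE) with an adjustable base frequency. Increasing `base`
# slows the rotation of every frequency band, which is one common way to
# adapt a RoPE-based model such as Llama 2 to longer context windows
# during continual pretraining.
import numpy as np

def rope_angles(positions: np.ndarray, head_dim: int, base: float) -> np.ndarray:
    """Rotation angles theta_{p,i} = p * base^(-2i/d) for each position p
    and frequency band i (one band per pair of dimensions)."""
    inv_freq = base ** (-np.arange(0, head_dim, 2) / head_dim)  # (d/2,)
    return np.outer(positions, inv_freq)                        # (seq, d/2)

def apply_rope(x: np.ndarray, base: float = 10_000.0) -> np.ndarray:
    """Rotate query/key vectors x of shape (seq_len, head_dim) by their
    position-dependent angles. head_dim must be even."""
    seq_len, head_dim = x.shape
    ang = rope_angles(np.arange(seq_len), head_dim, base)
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[:, 0::2], x[:, 1::2]          # interleaved dimension pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# A larger base stretches the rotation period, so 32k-token sequences can be
# handled without the angles drifting far outside the pretraining range.
q = np.random.randn(32_768, 128)
q_short = apply_rope(q, base=10_000.0)    # standard Llama 2 RoPE base
q_long = apply_rope(q, base=500_000.0)    # hypothetical larger base for long context
```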
2023
Text-guided 3D Human Generation from 2D Collections
Tsu-Jui Fu | Wenhan Xiong | Yixin Nie | Jingyu Liu | Barlas Oguz | William Wang
Findings of the Association for Computational Linguistics: EMNLP 2023
3D human modeling has been widely used for engaging interaction in gaming, film, and animation. The customization of these characters is crucial for creativity and scalability, which highlights the importance of controllability. In this work, we introduce Text-guided 3D Human Generation (T3H), where a model is to generate a 3D human, guided by the fashion description. There are two goals: 1) the 3D human should render articulately, and 2) its outfit is controlled by the given text. To address this T3H task, we propose Compositional Cross-modal Human (CCH). CCH adopts cross-modal attention to fuse compositional human rendering with the extracted fashion semantics. Each human body part perceives relevant textual guidance as its visual patterns. We incorporate the human prior and semantic discrimination to enhance 3D geometry transformation and fine-grained consistency, enabling it to learn from 2D collections for data efficiency. We conduct evaluations on DeepFashion and SHHQ with diverse fashion attributes covering the shape, fabric, and color of upper and lower clothing. Extensive experiments demonstrate that CCH achieves superior results for T3H with high efficiency.
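The abstract describes fusing compositional human rendering with extracted fashion semantics through cross-modal attention, where each body part attends to the text that is relevant to it. Below is a minimal sketch of that general mechanism (standard scaled dot-product cross-attention with residual fusion), not the CCH implementation; the number of body parts, token count, and projection matrices are assumptions made purely for illustration.

```python
# Minimal sketch (not the CCH code): cross-modal attention in which each
# body-part feature queries the encoded fashion-description tokens, so every
# part picks up the textual guidance relevant to its visual pattern.
import numpy as np

def softmax(z: np.ndarray, axis: int = -1) -> np.ndarray:
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(part_feats: np.ndarray,   # (num_parts, d) body-part features
                          text_feats: np.ndarray,   # (num_tokens, d) text embeddings
                          w_q: np.ndarray, w_k: np.ndarray, w_v: np.ndarray
                          ) -> np.ndarray:
    """Body-part features (queries) attend over text tokens (keys/values)
    and are fused with the attended fashion semantics via a residual add."""
    q = part_feats @ w_q
    k = text_feats @ w_k
    v = text_feats @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (num_parts, num_tokens)
    attended = softmax(scores) @ v            # (num_parts, d)
    return part_feats + attended              # residual fusion

d = 64
rng = np.random.default_rng(0)
parts = rng.normal(size=(24, d))    # assumed number of body parts, for illustration
tokens = rng.normal(size=(16, d))   # encoded fashion description (assumed length)
fused = cross_modal_attention(parts, tokens,
                              *(rng.normal(size=(d, d)) for _ in range(3)))
```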