Yuchen Ren
2025
Biology-Instructions: A Dataset and Benchmark for Multi-Omics Sequence Understanding Capability of Large Language Models
Haonan He | Yuchen Ren | Yining Tang | Ziyang Xu | Junxian Li | Minghao Yang | Di Zhang | Yuan Dong | Tao Chen | Shufei Zhang | Yuqiang Li | Nanqing Dong | Wanli Ouyang | Dongzhan Zhou | Peng Ye
Findings of the Association for Computational Linguistics: EMNLP 2025
Large language models (LLMs) have shown remarkable capabilities in general domains, but their application to multi-omics biology remains underexplored. To address this gap, we introduce Biology-Instructions, the first large-scale instruction-tuning dataset for multi-omics biological sequences, including DNA, RNA, proteins, and multi-molecules. This dataset bridges LLMs and complex biological sequence-related tasks, enhancing their versatility and reasoning while maintaining conversational fluency. We also highlight significant limitations of current state-of-the-art LLMs on multi-omics tasks without specialized training. To overcome this, we propose ChatMultiOmics, a strong baseline with a novel three-stage training pipeline, demonstrating superior biological understanding through Biology-Instructions. Both resources are publicly available, paving the way for better integration of LLMs in multi-omics analysis. Biology-Instructions is publicly available at: https://github.com/hhnqqq/Biology-Instructions.
2023
Improved Visual Story Generation with Adaptive Context Modeling
Zhangyin Feng | Yuchen Ren | Xinmiao Yu | Xiaocheng Feng | Duyu Tang | Shuming Shi | Bing Qin
Findings of the Association for Computational Linguistics: ACL 2023
Diffusion models developed on top of powerful text-to-image generation models like Stable Diffusion achieve remarkable success in visual story generation. However, the best-performing approach considers historically generated results as flattened memory cells, ignoring the fact that not all preceding images contribute equally to the generation of the characters and scenes at the current stage. To address this, we present a simple method that improves the leading system with adaptive context modeling, which is not only incorporated in the encoder but also adopted as additional guidance in the sampling stage to boost the global consistency of the generated story. We evaluate our model on PororoSV and FlintstonesSV datasets and show that our approach achieves state-of-the-art FID scores on both story visualization and continuation scenarios. We conduct detailed model analysis and show that our model excels at generating semantically consistent images for stories.