Jingyuan Chen
2024
MPCoder: Multi-user Personalized Code Generator with Explicit and Implicit Style Representation Learning
Zhenlong Dai | Chang Yao | WenKang Han | Yuanying Yuanying | Zhipeng Gao | Jingyuan Chen
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Models (LLMs) have demonstrated great potential for assisting developers in their daily development. However, most research focuses on generating correct code; how to use LLMs to generate personalized code has seldom been investigated. To bridge this gap, we propose MPCoder (Multi-user Personalized Code Generator) to generate personalized code for multiple users. To better learn coding style features, we utilize explicit coding style residual learning to capture syntactic code style standards and implicit style learning to capture semantic code style conventions. We train a multi-user style adapter through contrastive learning to better differentiate the implicit feature representations of different users, ultimately enabling personalized code generation for multiple users. We further propose a novel evaluation metric for estimating the similarity between codes of different coding styles. The experimental results show the effectiveness of our approach for this novel task.
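The abstract mentions training a multi-user style adapter with contrastive learning so that implicit style representations of different users are kept apart. The snippet below is a minimal sketch of that general idea, not the authors' implementation: style embeddings of code written by the same user are pulled together and those of different users pushed apart. All names (`style_embed`, `user_ids`, `temperature`) are illustrative assumptions.

```python
# Hypothetical sketch of a user-level contrastive loss over implicit style embeddings.
import torch
import torch.nn.functional as F

def user_contrastive_loss(style_embed: torch.Tensor,
                          user_ids: torch.Tensor,
                          temperature: float = 0.1) -> torch.Tensor:
    """style_embed: (N, D) implicit style vectors; user_ids: (N,) user labels."""
    z = F.normalize(style_embed, dim=-1)                 # unit-norm embeddings
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)               # ignore self-similarity
    # positives: pairs of snippets written by the same user
    pos_mask = (user_ids.unsqueeze(0) == user_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()              # average over anchors with positives
```

Minimizing such a loss would encourage the adapter to produce user-separable style representations, which is the property the abstract attributes to its contrastive training.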
2018
Temporally Grounding Natural Sentence in Video
Jingyuan Chen | Xinpeng Chen | Lin Ma | Zequn Jie | Tat-Seng Chua
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
We introduce an effective and efficient method that grounds (i.e., localizes) natural sentences in long, untrimmed video sequences. Specifically, a novel Temporal GroundNet (TGN) is proposed to temporally capture the evolving fine-grained frame-by-word interactions between video and sentence. TGN sequentially scores a set of temporal candidates ending at each frame based on the exploited frame-by-word interactions, and finally grounds the segment corresponding to the sentence. Unlike traditional methods that treat overlapping segments separately in a sliding-window fashion, TGN aggregates the historical information and generates the final grounding result in a single pass. We extensively evaluate our proposed TGN on three public datasets with significant improvements over the state of the art. We further show the consistent effectiveness and efficiency of TGN through an ablation study and a runtime test.
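To make the single-pass candidate-scoring idea concrete, here is a minimal sketch in the spirit of the abstract, not the paper's code: a recurrent state aggregates frame-by-word interaction history, and at every frame the model scores K candidate segments ending at that frame. The module and variable names (`interactor`, `scorer`, `K`) are illustrative assumptions.

```python
# Hypothetical single-pass temporal grounding sketch: score K candidates ending at each frame.
import torch
import torch.nn as nn

class SinglePassGrounder(nn.Module):
    def __init__(self, frame_dim: int, word_dim: int, hidden: int, K: int):
        super().__init__()
        self.interactor = nn.GRUCell(frame_dim + word_dim, hidden)  # aggregates history
        self.scorer = nn.Linear(hidden, K)   # scores K candidate lengths ending at each frame

    def forward(self, frames: torch.Tensor, sent: torch.Tensor) -> torch.Tensor:
        # frames: (T, frame_dim) video features; sent: (word_dim,) pooled sentence feature
        h = frames.new_zeros(1, self.interactor.hidden_size)
        scores = []
        for t in range(frames.size(0)):                         # one pass over the video
            x = torch.cat([frames[t], sent]).unsqueeze(0)       # frame-word interaction input
            h = self.interactor(x, h)
            scores.append(self.scorer(h).squeeze(0))            # K candidates ending at frame t
        return torch.stack(scores)                              # (T, K) candidate scores
```

Taking the argmax over the resulting (T, K) score matrix picks a grounded segment after a single forward pass over the video, mirroring the one-pass property described in the abstract.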