Yewen Pu
2025
mrCAD: Multimodal Communication to Refine Computer-aided Designs
William P McCarthy | Saujas Vaduguru | Karl D.D. Willis | Justin Matejka | Judith E Fan | Daniel Fried | Yewen Pu
Findings of the Association for Computational Linguistics: EMNLP 2025
In collaborative creation tasks, people steer artifacts towards specific goals by _refining_ them with _multimodal_ communication over multiple rounds of interaction. In contrast, generative AI excels at creating artifacts in a single turn but can struggle to make precise refinements that match our design intent. To close this gap, we present mrCAD, a dataset of multi-turn interactions in which pairs of humans iteratively created and refined computer-aided designs (CADs). In each game, a _Designer_ sent instructions to a _Maker_, explaining how to create and subsequently refine a CAD to match a target design that only the _Designer_ could see. mrCAD consists of 6,082 communication games and 15,163 instruction-execution rounds, played by 1,092 pairs of human players. Crucially, _Designers_ had access to two communication modalities – text and drawing. Analysis finds that players relied more on text in refinement than in initial generation instructions, and used different linguistic elements for refinement than for generation. We also find that state-of-the-art VLMs are better at following generation instructions than refinement instructions. These results lay the foundation for modeling multi-turn, multimodal communication not captured in prior datasets.
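To make the interaction structure the abstract describes more concrete, the sketch below shows one way a single communication game and its rounds could be represented. This is an illustrative Python sketch, not the dataset's actual schema; the field names (`target`, `rounds`, `text`, `drawing`, `cad`) are assumptions.

```python
# Hypothetical representation of one mrCAD communication game.
# Field names are illustrative, not the dataset's actual schema.
from dataclasses import dataclass, field


@dataclass
class Round:
    text: str                  # Designer's textual instruction for this round
    drawing: bytes | None      # optional raster of the Designer's sketch
    cad: dict | None = None    # Maker's resulting CAD state after executing the round


@dataclass
class Game:
    target: dict                          # target design visible only to the Designer
    rounds: list[Round] = field(default_factory=list)


# A game with an initial generation round followed by a refinement round.
game = Game(target={"primitives": ["circle", "line"]})
game.rounds.append(Round(text="Draw a circle centered on the canvas.", drawing=None))
game.rounds.append(Round(text="Make it about half as large.", drawing=b"...sketch bytes..."))
```

Under this framing, the paper's modality analysis amounts to comparing the `text` and `drawing` fields across the first (generation) round and all subsequent (refinement) rounds.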
2022
Text Editing as Imitation Game
Ning Shi | Bin Tang | Bo Yuan | Longtao Huang | Yewen Pu | Jie Fu | Zhouhan Lin
Findings of the Association for Computational Linguistics: EMNLP 2022
Text editing, such as grammatical error correction, arises naturally from imperfect textual data. Recent works frame text editing as a multi-round sequence tagging task, where operations – such as insertion and substitution – are represented as a sequence of tags. While achieving good results, this encoding is limited in flexibility, as all actions are bound to token-level tags. In this work, we reformulate text editing as an imitation game using behavioral cloning. Specifically, we convert conventional sequence-to-sequence data into state-to-action demonstrations, where the action space can be as flexible as needed. Instead of generating the actions one at a time, we introduce a dual-decoder structure that parallelizes decoding while retaining the dependencies between action tokens, coupled with trajectory augmentation to alleviate the distribution shift that imitation learning often suffers from. In experiments on a suite of Arithmetic Equation benchmarks, our model consistently outperforms the autoregressive baselines in terms of performance, efficiency, and robustness. We hope our findings will shed light on future studies in reinforcement learning that apply sequence-level action generation to natural language processing.
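The central data transformation the abstract describes, turning sequence-to-sequence pairs into state-to-action demonstrations, can be sketched as follows. This is a minimal illustration assuming a simple token-level edit action space derived with `difflib`; it is not the paper's actual action vocabulary or decoder architecture.

```python
# Minimal sketch: convert a (source, target) pair into state -> action
# demonstrations for behavioral cloning. The action space here
# (difflib-style replace/delete/insert spans plus a terminal "stop") is
# an assumption for illustration, not the paper's actual action set.
from difflib import SequenceMatcher


def to_demonstrations(source: list[str], target: list[str]):
    """Return (state, action) pairs; applying each action to its state
    moves the text one edit closer to the target."""
    demos, state, offset = [], list(source), 0
    for tag, i1, i2, j1, j2 in SequenceMatcher(a=source, b=target).get_opcodes():
        if tag == "equal":
            continue
        a1, a2 = i1 + offset, i2 + offset          # span indices in the current state
        action = (tag, a1, a2, target[j1:j2])      # e.g. ("replace", 1, 2, ["goes"])
        demos.append((list(state), action))
        state[a1:a2] = target[j1:j2]               # apply the edit to get the next state
        offset += (j2 - j1) - (i2 - i1)            # account for length changes
    demos.append((list(state), ("stop",)))         # terminal action
    return demos


for state, action in to_demonstrations("he go to school".split(),
                                       "he goes to school".split()):
    print(state, "->", action)
```

Because each demonstration pairs a full intermediate state with an edit action rather than a per-token tag, the same recipe accommodates span-level or arbitrary structured actions, which is the flexibility the abstract emphasizes.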